Test Report: Docker_macOS 15565

3055562a73e3eb609a1971b4f703ef7d8b32cd43:2023-01-24:27570

Test fail (14/306)

TestIngressAddonLegacy/StartLegacyK8sCluster (254.06s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-211000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0124 09:41:50.942444    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 09:42:18.630066    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 09:42:45.076410    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:45.081898    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:45.093148    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:45.115245    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:45.157261    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:45.237964    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:45.398310    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:45.720519    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:46.361400    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:47.642116    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:50.203604    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:55.323997    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:43:05.565284    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:43:26.045287    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:44:07.006856    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-211000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m14.026979869s)

-- stdout --
	* [ingress-addon-legacy-211000] minikube v1.28.0 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-211000 in cluster ingress-addon-legacy-211000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.22 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0124 09:40:07.589112    7953 out.go:296] Setting OutFile to fd 1 ...
	I0124 09:40:07.589259    7953 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 09:40:07.589265    7953 out.go:309] Setting ErrFile to fd 2...
	I0124 09:40:07.589269    7953 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 09:40:07.589402    7953 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
	I0124 09:40:07.589979    7953 out.go:303] Setting JSON to false
	I0124 09:40:07.608173    7953 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2382,"bootTime":1674579625,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0124 09:40:07.608273    7953 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0124 09:40:07.630675    7953 out.go:177] * [ingress-addon-legacy-211000] minikube v1.28.0 on Darwin 13.1
	I0124 09:40:07.652231    7953 notify.go:220] Checking for updates...
	I0124 09:40:07.652265    7953 out.go:177]   - MINIKUBE_LOCATION=15565
	I0124 09:40:07.674300    7953 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 09:40:07.696093    7953 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0124 09:40:07.717355    7953 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 09:40:07.739214    7953 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	I0124 09:40:07.760409    7953 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0124 09:40:07.782477    7953 driver.go:365] Setting default libvirt URI to qemu:///system
	I0124 09:40:07.842779    7953 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0124 09:40:07.842933    7953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 09:40:07.984678    7953 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:51 SystemTime:2023-01-24 17:40:07.89206901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:
/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 09:40:08.028340    7953 out.go:177] * Using the docker driver based on user configuration
	I0124 09:40:08.049494    7953 start.go:296] selected driver: docker
	I0124 09:40:08.049516    7953 start.go:840] validating driver "docker" against <nil>
	I0124 09:40:08.049532    7953 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0124 09:40:08.053351    7953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 09:40:08.194114    7953 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:51 SystemTime:2023-01-24 17:40:08.103734687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 09:40:08.194247    7953 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0124 09:40:08.194389    7953 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0124 09:40:08.216222    7953 out.go:177] * Using Docker Desktop driver with root privileges
	I0124 09:40:08.237975    7953 cni.go:84] Creating CNI manager for ""
	I0124 09:40:08.238031    7953 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0124 09:40:08.238052    7953 start_flags.go:319] config:
	{Name:ingress-addon-legacy-211000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-211000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 09:40:08.280885    7953 out.go:177] * Starting control plane node ingress-addon-legacy-211000 in cluster ingress-addon-legacy-211000
	I0124 09:40:08.302009    7953 cache.go:120] Beginning downloading kic base image for docker with docker
	I0124 09:40:08.323864    7953 out.go:177] * Pulling base image ...
	I0124 09:40:08.365964    7953 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0124 09:40:08.365969    7953 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0124 09:40:08.421034    7953 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0124 09:40:08.421057    7953 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0124 09:40:08.438202    7953 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0124 09:40:08.438241    7953 cache.go:57] Caching tarball of preloaded images
	I0124 09:40:08.438631    7953 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0124 09:40:08.459891    7953 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0124 09:40:08.501922    7953 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0124 09:40:08.580811    7953 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0124 09:40:11.000938    7953 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0124 09:40:11.001138    7953 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0124 09:40:11.620772    7953 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0124 09:40:11.621029    7953 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/config.json ...
	I0124 09:40:11.621055    7953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/config.json: {Name:mk608cc88daffca7698a234960f4d9ea5c3d5378 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 09:40:11.621385    7953 cache.go:193] Successfully downloaded all kic artifacts
	I0124 09:40:11.621411    7953 start.go:364] acquiring machines lock for ingress-addon-legacy-211000: {Name:mkaa30950e8aec33011c28dbd6cc20c941a3c9b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0124 09:40:11.621531    7953 start.go:368] acquired machines lock for "ingress-addon-legacy-211000" in 113.671µs
	I0124 09:40:11.621553    7953 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-211000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-211000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0124 09:40:11.621661    7953 start.go:125] createHost starting for "" (driver="docker")
	I0124 09:40:11.665684    7953 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0124 09:40:11.666048    7953 start.go:159] libmachine.API.Create for "ingress-addon-legacy-211000" (driver="docker")
	I0124 09:40:11.666090    7953 client.go:168] LocalClient.Create starting
	I0124 09:40:11.666309    7953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem
	I0124 09:40:11.666390    7953 main.go:141] libmachine: Decoding PEM data...
	I0124 09:40:11.666421    7953 main.go:141] libmachine: Parsing certificate...
	I0124 09:40:11.666511    7953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem
	I0124 09:40:11.666576    7953 main.go:141] libmachine: Decoding PEM data...
	I0124 09:40:11.666595    7953 main.go:141] libmachine: Parsing certificate...
	I0124 09:40:11.667819    7953 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-211000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0124 09:40:11.724635    7953 cli_runner.go:211] docker network inspect ingress-addon-legacy-211000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0124 09:40:11.724758    7953 network_create.go:281] running [docker network inspect ingress-addon-legacy-211000] to gather additional debugging logs...
	I0124 09:40:11.724781    7953 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-211000
	W0124 09:40:11.778750    7953 cli_runner.go:211] docker network inspect ingress-addon-legacy-211000 returned with exit code 1
	I0124 09:40:11.778779    7953 network_create.go:284] error running [docker network inspect ingress-addon-legacy-211000]: docker network inspect ingress-addon-legacy-211000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-211000
	I0124 09:40:11.778794    7953 network_create.go:286] output of [docker network inspect ingress-addon-legacy-211000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-211000
	
	** /stderr **
	I0124 09:40:11.778888    7953 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0124 09:40:11.833038    7953 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00044bb00}
	I0124 09:40:11.833074    7953 network_create.go:123] attempt to create docker network ingress-addon-legacy-211000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0124 09:40:11.833146    7953 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-211000 ingress-addon-legacy-211000
	I0124 09:40:11.919843    7953 network_create.go:107] docker network ingress-addon-legacy-211000 192.168.49.0/24 created
	I0124 09:40:11.919880    7953 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-211000" container
	I0124 09:40:11.920001    7953 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0124 09:40:11.974496    7953 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-211000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-211000 --label created_by.minikube.sigs.k8s.io=true
	I0124 09:40:12.031500    7953 oci.go:103] Successfully created a docker volume ingress-addon-legacy-211000
	I0124 09:40:12.031641    7953 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-211000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-211000 --entrypoint /usr/bin/test -v ingress-addon-legacy-211000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -d /var/lib
	I0124 09:40:12.489067    7953 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-211000
	I0124 09:40:12.489105    7953 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0124 09:40:12.489129    7953 kic.go:190] Starting extracting preloaded images to volume ...
	I0124 09:40:12.489232    7953 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-211000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir
	I0124 09:40:18.531166    7953 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-211000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir: (6.041943179s)
	I0124 09:40:18.531196    7953 kic.go:199] duration metric: took 6.042147 seconds to extract preloaded images to volume
	I0124 09:40:18.531335    7953 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0124 09:40:18.677155    7953 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-211000 --name ingress-addon-legacy-211000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-211000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-211000 --network ingress-addon-legacy-211000 --ip 192.168.49.2 --volume ingress-addon-legacy-211000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a
	I0124 09:40:19.037764    7953 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211000 --format={{.State.Running}}
	I0124 09:40:19.098389    7953 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211000 --format={{.State.Status}}
	I0124 09:40:19.164372    7953 cli_runner.go:164] Run: docker exec ingress-addon-legacy-211000 stat /var/lib/dpkg/alternatives/iptables
	I0124 09:40:19.285591    7953 oci.go:144] the created container "ingress-addon-legacy-211000" has a running status.
	I0124 09:40:19.285627    7953 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa...
	I0124 09:40:19.423271    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0124 09:40:19.423353    7953 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0124 09:40:19.529506    7953 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211000 --format={{.State.Status}}
	I0124 09:40:19.589624    7953 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0124 09:40:19.589645    7953 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-211000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0124 09:40:19.694047    7953 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211000 --format={{.State.Status}}
	I0124 09:40:19.752126    7953 machine.go:88] provisioning docker machine ...
	I0124 09:40:19.752165    7953 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-211000"
	I0124 09:40:19.752266    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
	I0124 09:40:19.810362    7953 main.go:141] libmachine: Using SSH client type: native
	I0124 09:40:19.810571    7953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50707 <nil> <nil>}
	I0124 09:40:19.810587    7953 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-211000 && echo "ingress-addon-legacy-211000" | sudo tee /etc/hostname
	I0124 09:40:19.954363    7953 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-211000
	
	I0124 09:40:19.954465    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
	I0124 09:40:20.012604    7953 main.go:141] libmachine: Using SSH client type: native
	I0124 09:40:20.012761    7953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50707 <nil> <nil>}
	I0124 09:40:20.012782    7953 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-211000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-211000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-211000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0124 09:40:20.147263    7953 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0124 09:40:20.147287    7953 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3057/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3057/.minikube}
	I0124 09:40:20.147311    7953 ubuntu.go:177] setting up certificates
	I0124 09:40:20.147317    7953 provision.go:83] configureAuth start
	I0124 09:40:20.147393    7953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-211000
	I0124 09:40:20.206612    7953 provision.go:138] copyHostCerts
	I0124 09:40:20.206664    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem
	I0124 09:40:20.206721    7953 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem, removing ...
	I0124 09:40:20.206727    7953 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem
	I0124 09:40:20.206863    7953 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem (1078 bytes)
	I0124 09:40:20.207026    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem
	I0124 09:40:20.207061    7953 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem, removing ...
	I0124 09:40:20.207066    7953 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem
	I0124 09:40:20.207143    7953 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem (1123 bytes)
	I0124 09:40:20.207268    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem
	I0124 09:40:20.207303    7953 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem, removing ...
	I0124 09:40:20.207307    7953 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem
	I0124 09:40:20.207375    7953 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem (1675 bytes)
	I0124 09:40:20.207496    7953 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-211000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-211000]
	I0124 09:40:20.461765    7953 provision.go:172] copyRemoteCerts
	I0124 09:40:20.461823    7953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0124 09:40:20.461878    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
	I0124 09:40:20.521404    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50707 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa Username:docker}
	I0124 09:40:20.613532    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0124 09:40:20.613620    7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0124 09:40:20.631074    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0124 09:40:20.631146    7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0124 09:40:20.648126    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0124 09:40:20.648248    7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0124 09:40:20.665846    7953 provision.go:86] duration metric: configureAuth took 518.523623ms
	I0124 09:40:20.665863    7953 ubuntu.go:193] setting minikube options for container-runtime
	I0124 09:40:20.666012    7953 config.go:180] Loaded profile config "ingress-addon-legacy-211000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0124 09:40:20.666072    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
	I0124 09:40:20.724005    7953 main.go:141] libmachine: Using SSH client type: native
	I0124 09:40:20.724171    7953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50707 <nil> <nil>}
	I0124 09:40:20.724188    7953 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0124 09:40:20.861622    7953 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0124 09:40:20.861640    7953 ubuntu.go:71] root file system type: overlay
	I0124 09:40:20.861804    7953 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0124 09:40:20.861885    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
	I0124 09:40:20.919603    7953 main.go:141] libmachine: Using SSH client type: native
	I0124 09:40:20.919774    7953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50707 <nil> <nil>}
	I0124 09:40:20.919823    7953 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0124 09:40:21.063158    7953 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0124 09:40:21.063289    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
	I0124 09:40:21.122163    7953 main.go:141] libmachine: Using SSH client type: native
	I0124 09:40:21.122308    7953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50707 <nil> <nil>}
	I0124 09:40:21.122323    7953 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0124 09:40:21.726119    7953 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-12-15 22:25:58.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-24 17:40:21.061403580 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0124 09:40:21.726146    7953 machine.go:91] provisioned docker machine in 1.97402489s
	I0124 09:40:21.726152    7953 client.go:171] LocalClient.Create took 10.060173537s
	I0124 09:40:21.726169    7953 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-211000" took 10.060240414s
	I0124 09:40:21.726180    7953 start.go:300] post-start starting for "ingress-addon-legacy-211000" (driver="docker")
	I0124 09:40:21.726186    7953 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0124 09:40:21.726272    7953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0124 09:40:21.726343    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
	I0124 09:40:21.788539    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50707 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa Username:docker}
	I0124 09:40:21.885796    7953 ssh_runner.go:195] Run: cat /etc/os-release
	I0124 09:40:21.889290    7953 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0124 09:40:21.889306    7953 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0124 09:40:21.889318    7953 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0124 09:40:21.889326    7953 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0124 09:40:21.889335    7953 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/addons for local assets ...
	I0124 09:40:21.889430    7953 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/files for local assets ...
	I0124 09:40:21.889607    7953 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem -> 43552.pem in /etc/ssl/certs
	I0124 09:40:21.889614    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem -> /etc/ssl/certs/43552.pem
	I0124 09:40:21.889804    7953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0124 09:40:21.896936    7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /etc/ssl/certs/43552.pem (1708 bytes)
	I0124 09:40:21.914317    7953 start.go:303] post-start completed in 188.129159ms
	I0124 09:40:21.914914    7953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-211000
	I0124 09:40:21.972516    7953 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/config.json ...
	I0124 09:40:21.972956    7953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0124 09:40:21.973038    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
	I0124 09:40:22.032724    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50707 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa Username:docker}
	I0124 09:40:22.124356    7953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0124 09:40:22.129022    7953 start.go:128] duration metric: createHost completed in 10.507476295s
	I0124 09:40:22.129047    7953 start.go:83] releasing machines lock for "ingress-addon-legacy-211000", held for 10.507627897s
	I0124 09:40:22.129136    7953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-211000
	I0124 09:40:22.185457    7953 ssh_runner.go:195] Run: cat /version.json
	I0124 09:40:22.185486    7953 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0124 09:40:22.185529    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
	I0124 09:40:22.185552    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
	I0124 09:40:22.245975    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50707 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa Username:docker}
	I0124 09:40:22.246182    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50707 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa Username:docker}
	I0124 09:40:22.334104    7953 ssh_runner.go:195] Run: systemctl --version
	I0124 09:40:22.539612    7953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0124 09:40:22.544901    7953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0124 09:40:22.564820    7953 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0124 09:40:22.564909    7953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0124 09:40:22.578784    7953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0124 09:40:22.586425    7953 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
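For reference, the find/sed pass above pins any bridge or podman CNI configs on the node to the pod CIDR 10.244.0.0/16 and drops IPv6 dst/subnet entries. A minimal way to inspect the patched file it reports, assuming the same profile name as this run, is to read it back through the node container:

	docker exec ingress-addon-legacy-211000 sudo cat /etc/cni/net.d/100-crio-bridge.conf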
	I0124 09:40:22.586438    7953 start.go:472] detecting cgroup driver to use...
	I0124 09:40:22.586455    7953 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 09:40:22.586535    7953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 09:40:22.610283    7953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
	I0124 09:40:22.621185    7953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0124 09:40:22.629441    7953 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0124 09:40:22.629499    7953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0124 09:40:22.637825    7953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 09:40:22.646425    7953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0124 09:40:22.654894    7953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 09:40:22.663705    7953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0124 09:40:22.671440    7953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0124 09:40:22.679811    7953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0124 09:40:22.687056    7953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0124 09:40:22.694360    7953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 09:40:22.762707    7953 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0124 09:40:22.834083    7953 start.go:472] detecting cgroup driver to use...
	I0124 09:40:22.834106    7953 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 09:40:22.834182    7953 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0124 09:40:22.845592    7953 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0124 09:40:22.845666    7953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0124 09:40:22.856391    7953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 09:40:22.870845    7953 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0124 09:40:22.966831    7953 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
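The /etc/crictl.yaml written just above simply points crictl at the dockershim socket that the selected Docker runtime will expose. Assuming crictl is present in the node image, the effective endpoint can be confirmed with:

	docker exec ingress-addon-legacy-211000 cat /etc/crictl.yaml
	# expected: runtime-endpoint: unix:///var/run/dockershim.sock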
	I0124 09:40:23.066074    7953 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0124 09:40:23.066107    7953 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0124 09:40:23.081593    7953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 09:40:23.172477    7953 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0124 09:40:23.382709    7953 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 09:40:23.412181    7953 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
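The 144-byte /etc/docker/daemon.json copied above is what pins Docker to the "cgroupfs" driver; its exact contents are not dumped in this log, but a representative reconstruction (an assumption, not the verbatim file) plus the follow-up check would look like this:

	# hypothetical daemon.json roughly equivalent to what minikube ships for cgroupfs
	sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker
	docker info --format '{{.CgroupDriver}}'   # should print: cgroupfs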
	I0124 09:40:23.463408    7953 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.22 ...
	I0124 09:40:23.463583    7953 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-211000 dig +short host.docker.internal
	I0124 09:40:23.579388    7953 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0124 09:40:23.579535    7953 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0124 09:40:23.584154    7953 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 09:40:23.593905    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
	I0124 09:40:23.651905    7953 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0124 09:40:23.651994    7953 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 09:40:23.677865    7953 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0124 09:40:23.677884    7953 docker.go:560] Images already preloaded, skipping extraction
	I0124 09:40:23.677975    7953 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 09:40:23.701976    7953 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0124 09:40:23.701995    7953 cache_images.go:84] Images are preloaded, skipping loading
	I0124 09:40:23.702081    7953 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0124 09:40:23.773109    7953 cni.go:84] Creating CNI manager for ""
	I0124 09:40:23.773127    7953 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0124 09:40:23.773147    7953 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0124 09:40:23.773171    7953 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-211000 NodeName:ingress-addon-legacy-211000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0124 09:40:23.773400    7953 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-211000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0124 09:40:23.773522    7953 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-211000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-211000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0124 09:40:23.773586    7953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0124 09:40:23.781801    7953 binaries.go:44] Found k8s binaries, skipping transfer
	I0124 09:40:23.781878    7953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0124 09:40:23.789165    7953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0124 09:40:23.802853    7953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0124 09:40:23.816147    7953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
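The three scp steps above land the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit, and the kubeadm config on the node. If the control-plane bring-up below stalls, the rendered unit and flags can be checked directly on the node (systemctl cat is the same mechanism this log already uses for docker.service):

	sudo systemctl cat kubelet
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo cat /var/tmp/minikube/kubeadm.yaml.new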
	I0124 09:40:23.829667    7953 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0124 09:40:23.833557    7953 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 09:40:23.843928    7953 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000 for IP: 192.168.49.2
	I0124 09:40:23.843947    7953 certs.go:186] acquiring lock for shared ca certs: {Name:mk86da88dcf76a0f52f98f7208bc1d04a2e55c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 09:40:23.844115    7953 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key
	I0124 09:40:23.844181    7953 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key
	I0124 09:40:23.844226    7953 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/client.key
	I0124 09:40:23.844241    7953 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/client.crt with IP's: []
	I0124 09:40:23.956000    7953 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/client.crt ...
	I0124 09:40:23.956010    7953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/client.crt: {Name:mkcc66a6a579ed07c5d0fe8005d5efbf327e4407 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 09:40:23.956284    7953 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/client.key ...
	I0124 09:40:23.956292    7953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/client.key: {Name:mk9ef8aa5f6d0f635158bc9ada91e0b32146eefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 09:40:23.956472    7953 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.key.dd3b5fb2
	I0124 09:40:23.956486    7953 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0124 09:40:24.272400    7953 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.crt.dd3b5fb2 ...
	I0124 09:40:24.272414    7953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.crt.dd3b5fb2: {Name:mkd5060f920228c8deffcfba869657319c9157ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 09:40:24.272708    7953 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.key.dd3b5fb2 ...
	I0124 09:40:24.272716    7953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.key.dd3b5fb2: {Name:mk99b57c0bbc07148afed701e08444e3a30d05da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 09:40:24.272904    7953 certs.go:333] copying /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.crt
	I0124 09:40:24.273074    7953 certs.go:337] copying /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.key
	I0124 09:40:24.273231    7953 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.key
	I0124 09:40:24.273246    7953 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.crt with IP's: []
	I0124 09:40:24.565401    7953 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.crt ...
	I0124 09:40:24.565416    7953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.crt: {Name:mkfcb5d6eb5e9b4779f2ecab1dce3bf7bbea2e82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 09:40:24.565720    7953 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.key ...
	I0124 09:40:24.565730    7953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.key: {Name:mk8b3ddc900a601a5b725be79f499bdb29e9666f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 09:40:24.565956    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0124 09:40:24.565984    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0124 09:40:24.566003    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0124 09:40:24.566040    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0124 09:40:24.566092    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0124 09:40:24.566127    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0124 09:40:24.566143    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0124 09:40:24.566159    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0124 09:40:24.566265    7953 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem (1338 bytes)
	W0124 09:40:24.566309    7953 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355_empty.pem, impossibly tiny 0 bytes
	I0124 09:40:24.566319    7953 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem (1679 bytes)
	I0124 09:40:24.566418    7953 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem (1078 bytes)
	I0124 09:40:24.566448    7953 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem (1123 bytes)
	I0124 09:40:24.566517    7953 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem (1675 bytes)
	I0124 09:40:24.566596    7953 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem (1708 bytes)
	I0124 09:40:24.566628    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem -> /usr/share/ca-certificates/4355.pem
	I0124 09:40:24.566676    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem -> /usr/share/ca-certificates/43552.pem
	I0124 09:40:24.566694    7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0124 09:40:24.567244    7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0124 09:40:24.586499    7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0124 09:40:24.603788    7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0124 09:40:24.621086    7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0124 09:40:24.638240    7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0124 09:40:24.655283    7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0124 09:40:24.672762    7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0124 09:40:24.689940    7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0124 09:40:24.707549    7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem --> /usr/share/ca-certificates/4355.pem (1338 bytes)
	I0124 09:40:24.725148    7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /usr/share/ca-certificates/43552.pem (1708 bytes)
	I0124 09:40:24.742483    7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0124 09:40:24.760070    7953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0124 09:40:24.773060    7953 ssh_runner.go:195] Run: openssl version
	I0124 09:40:24.778703    7953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0124 09:40:24.786939    7953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0124 09:40:24.791068    7953 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0124 09:40:24.791116    7953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0124 09:40:24.796423    7953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0124 09:40:24.804506    7953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4355.pem && ln -fs /usr/share/ca-certificates/4355.pem /etc/ssl/certs/4355.pem"
	I0124 09:40:24.813127    7953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4355.pem
	I0124 09:40:24.817338    7953 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:34 /usr/share/ca-certificates/4355.pem
	I0124 09:40:24.817397    7953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4355.pem
	I0124 09:40:24.822826    7953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4355.pem /etc/ssl/certs/51391683.0"
	I0124 09:40:24.831220    7953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43552.pem && ln -fs /usr/share/ca-certificates/43552.pem /etc/ssl/certs/43552.pem"
	I0124 09:40:24.839644    7953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43552.pem
	I0124 09:40:24.843676    7953 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:34 /usr/share/ca-certificates/43552.pem
	I0124 09:40:24.843723    7953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43552.pem
	I0124 09:40:24.849443    7953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43552.pem /etc/ssl/certs/3ec20f2e.0"
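The openssl/ln sequence above follows the standard OpenSSL CA-directory convention: each certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs by its subject hash so TLS lookups can find it. A compact equivalent of what the runner did for the minikube CA (the hash should match the b5213941.0 link above):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"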
	I0124 09:40:24.857924    7953 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-211000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-211000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 09:40:24.858031    7953 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0124 09:40:24.880036    7953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0124 09:40:24.888820    7953 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
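With the config now at /var/tmp/minikube/kubeadm.yaml, the control-plane images could also be pre-pulled with the pinned kubeadm binary, as kubeadm's own preflight hint further down suggests; a sketch (unnecessary in this run, since the docker images listing above shows the preload already present):

	sudo /var/lib/minikube/binaries/v1.18.20/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml
	sudo /var/lib/minikube/binaries/v1.18.20/kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml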
	I0124 09:40:24.896337    7953 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0124 09:40:24.896404    7953 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0124 09:40:24.904093    7953 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0124 09:40:24.904124    7953 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0124 09:40:24.952248    7953 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0124 09:40:24.952307    7953 kubeadm.go:322] [preflight] Running pre-flight checks
	I0124 09:40:25.254148    7953 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0124 09:40:25.254342    7953 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0124 09:40:25.254442    7953 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0124 09:40:25.478129    7953 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0124 09:40:25.478589    7953 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0124 09:40:25.478639    7953 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0124 09:40:25.552045    7953 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0124 09:40:25.572706    7953 out.go:204]   - Generating certificates and keys ...
	I0124 09:40:25.572800    7953 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0124 09:40:25.572895    7953 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0124 09:40:25.685263    7953 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0124 09:40:25.842892    7953 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0124 09:40:26.100753    7953 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0124 09:40:26.322732    7953 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0124 09:40:26.386531    7953 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0124 09:40:26.386668    7953 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-211000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0124 09:40:26.475446    7953 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0124 09:40:26.475541    7953 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-211000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0124 09:40:26.544290    7953 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0124 09:40:26.753569    7953 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0124 09:40:26.902294    7953 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0124 09:40:26.902355    7953 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0124 09:40:27.026203    7953 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0124 09:40:27.124270    7953 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0124 09:40:27.185409    7953 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0124 09:40:27.331587    7953 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0124 09:40:27.332241    7953 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0124 09:40:27.373548    7953 out.go:204]   - Booting up control plane ...
	I0124 09:40:27.373764    7953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0124 09:40:27.373908    7953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0124 09:40:27.374066    7953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0124 09:40:27.374209    7953 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0124 09:40:27.374480    7953 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0124 09:41:07.342039    7953 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0124 09:41:07.342973    7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 09:41:07.343203    7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 09:41:12.344962    7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 09:41:12.345211    7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 09:41:22.347108    7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 09:41:22.347361    7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 09:41:42.347313    7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 09:41:42.347462    7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 09:42:22.349128    7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 09:42:22.349446    7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 09:42:22.349462    7953 kubeadm.go:322] 
	I0124 09:42:22.349508    7953 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0124 09:42:22.349548    7953 kubeadm.go:322] 		timed out waiting for the condition
	I0124 09:42:22.349554    7953 kubeadm.go:322] 
	I0124 09:42:22.349622    7953 kubeadm.go:322] 	This error is likely caused by:
	I0124 09:42:22.349679    7953 kubeadm.go:322] 		- The kubelet is not running
	I0124 09:42:22.349875    7953 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0124 09:42:22.349896    7953 kubeadm.go:322] 
	I0124 09:42:22.350053    7953 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0124 09:42:22.350103    7953 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0124 09:42:22.350142    7953 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0124 09:42:22.350150    7953 kubeadm.go:322] 
	I0124 09:42:22.350315    7953 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0124 09:42:22.350420    7953 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0124 09:42:22.350436    7953 kubeadm.go:322] 
	I0124 09:42:22.350532    7953 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0124 09:42:22.350594    7953 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0124 09:42:22.350658    7953 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0124 09:42:22.350689    7953 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0124 09:42:22.350697    7953 kubeadm.go:322] 
	I0124 09:42:22.354217    7953 kubeadm.go:322] W0124 17:40:24.951424    1167 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0124 09:42:22.354373    7953 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0124 09:42:22.354450    7953 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0124 09:42:22.354564    7953 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
	I0124 09:42:22.354652    7953 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0124 09:42:22.354758    7953 kubeadm.go:322] W0124 17:40:27.337491    1167 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0124 09:42:22.354856    7953 kubeadm.go:322] W0124 17:40:27.338480    1167 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0124 09:42:22.354927    7953 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0124 09:42:22.354993    7953 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0124 09:42:22.355189    7953 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-211000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-211000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0124 17:40:24.951424    1167 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0124 17:40:27.337491    1167 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0124 17:40:27.338480    1167 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-211000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-211000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0124 17:40:24.951424    1167 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0124 17:40:27.337491    1167 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0124 17:40:27.338480    1167 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
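Before minikube resets and retries below, the failure could be triaged with exactly the commands kubeadm suggests, executed against the node container from the host (container and runtime names as in this run):

	docker exec ingress-addon-legacy-211000 systemctl status kubelet --no-pager
	docker exec ingress-addon-legacy-211000 journalctl -xeu kubelet --no-pager | tail -n 50
	docker exec ingress-addon-legacy-211000 sh -c "docker ps -a | grep kube | grep -v pause"
	# then, for a failing container ID from the listing:
	# docker exec ingress-addon-legacy-211000 docker logs <CONTAINERID>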
	
	I0124 09:42:22.355235    7953 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0124 09:42:22.778485    7953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 09:42:22.788264    7953 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0124 09:42:22.788329    7953 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0124 09:42:22.795632    7953 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0124 09:42:22.795655    7953 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0124 09:42:22.842330    7953 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0124 09:42:22.842386    7953 kubeadm.go:322] [preflight] Running pre-flight checks
	I0124 09:42:23.136443    7953 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0124 09:42:23.136543    7953 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0124 09:42:23.136629    7953 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0124 09:42:23.356886    7953 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0124 09:42:23.357325    7953 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0124 09:42:23.357360    7953 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0124 09:42:23.428001    7953 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0124 09:42:23.449498    7953 out.go:204]   - Generating certificates and keys ...
	I0124 09:42:23.449583    7953 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0124 09:42:23.449647    7953 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0124 09:42:23.449750    7953 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0124 09:42:23.449817    7953 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0124 09:42:23.449869    7953 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0124 09:42:23.449912    7953 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0124 09:42:23.450003    7953 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0124 09:42:23.450062    7953 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0124 09:42:23.450126    7953 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0124 09:42:23.450227    7953 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0124 09:42:23.450267    7953 kubeadm.go:322] [certs] Using the existing "sa" key
	I0124 09:42:23.450390    7953 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0124 09:42:23.560138    7953 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0124 09:42:23.634333    7953 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0124 09:42:23.775054    7953 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0124 09:42:23.968659    7953 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0124 09:42:23.969308    7953 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0124 09:42:23.991315    7953 out.go:204]   - Booting up control plane ...
	I0124 09:42:23.991471    7953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0124 09:42:23.991636    7953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0124 09:42:23.991790    7953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0124 09:42:23.991977    7953 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0124 09:42:23.992266    7953 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0124 09:43:03.977571    7953 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0124 09:43:03.978588    7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 09:43:03.978971    7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 09:43:08.980161    7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 09:43:08.980383    7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 09:43:18.981080    7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 09:43:18.981234    7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 09:43:38.983193    7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 09:43:38.983428    7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 09:44:18.984653    7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 09:44:18.984884    7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 09:44:18.984895    7953 kubeadm.go:322] 
	I0124 09:44:18.984981    7953 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0124 09:44:18.985038    7953 kubeadm.go:322] 		timed out waiting for the condition
	I0124 09:44:18.985049    7953 kubeadm.go:322] 
	I0124 09:44:18.985096    7953 kubeadm.go:322] 	This error is likely caused by:
	I0124 09:44:18.985144    7953 kubeadm.go:322] 		- The kubelet is not running
	I0124 09:44:18.985274    7953 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0124 09:44:18.985289    7953 kubeadm.go:322] 
	I0124 09:44:18.985408    7953 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0124 09:44:18.985446    7953 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0124 09:44:18.985491    7953 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0124 09:44:18.985506    7953 kubeadm.go:322] 
	I0124 09:44:18.985619    7953 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0124 09:44:18.985717    7953 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0124 09:44:18.985728    7953 kubeadm.go:322] 
	I0124 09:44:18.985841    7953 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0124 09:44:18.985909    7953 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0124 09:44:18.985983    7953 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0124 09:44:18.986027    7953 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0124 09:44:18.986043    7953 kubeadm.go:322] 
	I0124 09:44:18.988135    7953 kubeadm.go:322] W0124 17:42:22.841893    3660 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0124 09:44:18.988292    7953 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0124 09:44:18.988356    7953 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0124 09:44:18.988474    7953 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
	I0124 09:44:18.988560    7953 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0124 09:44:18.988656    7953 kubeadm.go:322] W0124 17:42:23.972891    3660 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0124 09:44:18.988760    7953 kubeadm.go:322] W0124 17:42:23.973793    3660 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0124 09:44:18.988847    7953 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0124 09:44:18.988915    7953 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0124 09:44:18.988937    7953 kubeadm.go:403] StartCluster complete in 3m54.133734717s
	I0124 09:44:18.989024    7953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 09:44:19.011615    7953 logs.go:279] 0 containers: []
	W0124 09:44:19.011630    7953 logs.go:281] No container was found matching "kube-apiserver"
	I0124 09:44:19.011698    7953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 09:44:19.034868    7953 logs.go:279] 0 containers: []
	W0124 09:44:19.034881    7953 logs.go:281] No container was found matching "etcd"
	I0124 09:44:19.034957    7953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 09:44:19.058323    7953 logs.go:279] 0 containers: []
	W0124 09:44:19.058338    7953 logs.go:281] No container was found matching "coredns"
	I0124 09:44:19.058414    7953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 09:44:19.080195    7953 logs.go:279] 0 containers: []
	W0124 09:44:19.080213    7953 logs.go:281] No container was found matching "kube-scheduler"
	I0124 09:44:19.080288    7953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 09:44:19.104179    7953 logs.go:279] 0 containers: []
	W0124 09:44:19.104192    7953 logs.go:281] No container was found matching "kube-proxy"
	I0124 09:44:19.104261    7953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 09:44:19.126233    7953 logs.go:279] 0 containers: []
	W0124 09:44:19.126246    7953 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 09:44:19.126317    7953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 09:44:19.148423    7953 logs.go:279] 0 containers: []
	W0124 09:44:19.148438    7953 logs.go:281] No container was found matching "storage-provisioner"
	I0124 09:44:19.148508    7953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 09:44:19.171809    7953 logs.go:279] 0 containers: []
	W0124 09:44:19.171823    7953 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 09:44:19.171836    7953 logs.go:124] Gathering logs for kubelet ...
	I0124 09:44:19.171843    7953 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 09:44:19.209674    7953 logs.go:124] Gathering logs for dmesg ...
	I0124 09:44:19.209692    7953 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 09:44:19.221759    7953 logs.go:124] Gathering logs for describe nodes ...
	I0124 09:44:19.221773    7953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 09:44:19.276804    7953 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 09:44:19.276816    7953 logs.go:124] Gathering logs for Docker ...
	I0124 09:44:19.276824    7953 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 09:44:19.293935    7953 logs.go:124] Gathering logs for container status ...
	I0124 09:44:19.293950    7953 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 09:44:21.366814    7953 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.072873986s)
	W0124 09:44:21.366942    7953 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0124 17:42:22.841893    3660 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0124 17:42:23.972891    3660 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0124 17:42:23.973793    3660 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0124 09:44:21.366959    7953 out.go:239] * 
	* 
	W0124 09:44:21.367080    7953 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0124 17:42:22.841893    3660 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0124 17:42:23.972891    3660 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0124 17:42:23.973793    3660 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0124 09:44:21.367094    7953 out.go:239] * 
	* 
	W0124 09:44:21.367710    7953 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0124 09:44:21.430327    7953 out.go:177] 
	W0124 09:44:21.472586    7953 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0124 17:42:22.841893    3660 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0124 17:42:23.972891    3660 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0124 17:42:23.973793    3660 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0124 09:44:21.472723    7953 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0124 09:44:21.472809    7953 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0124 09:44:21.494226    7953 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-211000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (254.06s)
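The run above fails with exit status 109 (K8S_KUBELET_NOT_RUNNING): the kubelet's health endpoint at localhost:10248 never answers, kubeadm's wait-control-plane phase times out after 4m0s, and every subsequent "docker ps -a --filter=name=k8s_*" probe finds 0 containers. The preflight warnings point at the most likely cause: Docker reports the "cgroupfs" cgroup driver while "systemd" is recommended, swap is enabled, and Docker 20.10.22 is newer than the last version validated for Kubernetes v1.18. The sketch below only replays the suggestion minikube itself prints in this log; the delete and ssh invocations are assumed triage steps for reproducing locally, not commands taken from this run.

	# Hedged triage sketch; only --extra-config comes from the log's own suggestion,
	# the rest mirrors the test's start flags or is an assumed follow-up step.
	out/minikube-darwin-amd64 delete -p ingress-addon-legacy-211000
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-211000 \
	  --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd
	# If the kubelet still refuses connections on :10248, inspect it inside the node:
	out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-211000 "sudo journalctl -xeu kubelet | tail -n 100"
	out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-211000 "docker ps -a | grep kube | grep -v pause"

If 'journalctl -xeu kubelet' shows cgroup-related startup errors, the systemd cgroup-driver setting is the probable fix; otherwise the unvalidated Docker version (20.10.22 vs. the validated 19.03) is the next variable to rule out.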

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.63s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-211000 addons enable ingress --alsologtostderr -v=5
E0124 09:45:28.927552    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-211000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.162662683s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0124 09:44:21.648263    8273 out.go:296] Setting OutFile to fd 1 ...
	I0124 09:44:21.648587    8273 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 09:44:21.648593    8273 out.go:309] Setting ErrFile to fd 2...
	I0124 09:44:21.648597    8273 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 09:44:21.648702    8273 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
	I0124 09:44:21.670897    8273 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0124 09:44:21.693212    8273 config.go:180] Loaded profile config "ingress-addon-legacy-211000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0124 09:44:21.693243    8273 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-211000"
	I0124 09:44:21.693258    8273 addons.go:227] Setting addon ingress=true in "ingress-addon-legacy-211000"
	I0124 09:44:21.693839    8273 host.go:66] Checking if "ingress-addon-legacy-211000" exists ...
	I0124 09:44:21.694819    8273 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211000 --format={{.State.Status}}
	I0124 09:44:21.772584    8273 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0124 09:44:21.793642    8273 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0124 09:44:21.835344    8273 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0124 09:44:21.856754    8273 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0124 09:44:21.878948    8273 addons.go:419] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0124 09:44:21.878987    8273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15613 bytes)
	I0124 09:44:21.879155    8273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
	I0124 09:44:21.935665    8273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50707 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa Username:docker}
	I0124 09:44:22.037093    8273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0124 09:44:22.088865    8273 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:22.088894    8273 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:22.366510    8273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0124 09:44:22.419832    8273 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:22.419847    8273 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:22.960788    8273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0124 09:44:23.015507    8273 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:23.015521    8273 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:23.671067    8273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0124 09:44:23.726331    8273 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:23.726348    8273 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:24.517834    8273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0124 09:44:24.570376    8273 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:24.570395    8273 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:25.742223    8273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0124 09:44:25.796999    8273 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:25.797015    8273 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:28.051851    8273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0124 09:44:28.105580    8273 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:28.105594    8273 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:29.716931    8273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0124 09:44:29.770556    8273 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:29.770571    8273 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:32.576298    8273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0124 09:44:32.630989    8273 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:32.631005    8273 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:36.458231    8273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0124 09:44:36.511379    8273 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:36.511394    8273 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:44.209534    8273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0124 09:44:44.263900    8273 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:44.263916    8273 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:58.900668    8273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0124 09:44:58.954513    8273 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:44:58.954528    8273 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:27.363124    8273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0124 09:45:27.416651    8273 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:27.416665    8273 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:50.586880    8273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0124 09:45:50.640791    8273 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:50.640826    8273 addons.go:457] Verifying addon ingress=true in "ingress-addon-legacy-211000"
	I0124 09:45:50.662392    8273 out.go:177] * Verifying ingress addon...
	I0124 09:45:50.684785    8273 out.go:177] 
	W0124 09:45:50.706674    8273 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-211000" does not exist: client config: context "ingress-addon-legacy-211000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-211000" does not exist: client config: context "ingress-addon-legacy-211000" does not exist]
	W0124 09:45:50.706703    8273 out.go:239] * 
	* 
	W0124 09:45:50.710105    8273 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0124 09:45:50.731143    8273 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
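The retry.go lines above show the mechanism at work here: the same `kubectl apply` is re-run with a growing delay (3.8s, 7.7s, 14.6s, 28.4s, ...) until minikube gives up and exits with MK_ADDON_ENABLE. Below is a minimal, hedged sketch of that retry-with-backoff shape; it is not minikube's actual retry.go, and the attempt count, initial delay, and doubling factor are arbitrary choices for illustration. The kubectl binary and manifest paths are the ones from the log.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // applyWithBackoff re-runs `kubectl apply -f manifest` until it succeeds or the
    // attempts run out, roughly doubling the wait between tries (the shape of the
    // "will retry after ..." lines above).
    func applyWithBackoff(kubectl, manifest string, attempts int) error {
        delay := 500 * time.Millisecond
        var lastErr error
        for i := 0; i < attempts; i++ {
            cmd := exec.Command(kubectl, "apply", "-f", manifest)
            cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
            out, err := cmd.CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
            fmt.Printf("will retry after %s: %v\n", delay, lastErr)
            time.Sleep(delay)
            delay *= 2
        }
        return lastErr
    }

    func main() {
        err := applyWithBackoff("/var/lib/minikube/binaries/v1.18.20/kubectl",
            "/etc/kubernetes/addons/ingress-deploy.yaml", 8)
        if err != nil {
            fmt.Println("giving up:", err)
        }
    }

When the apiserver never comes up, as in this run, every attempt hits the same connection-refused error on localhost:8443 and the caller exits after the final try.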
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-211000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-211000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1",
	        "Created": "2023-01-24T17:40:18.731720215Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 52367,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-24T17:40:19.030297941Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1/hosts",
	        "LogPath": "/var/lib/docker/containers/cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1/cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1-json.log",
	        "Name": "/ingress-addon-legacy-211000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-211000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-211000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/daf10bc517a9240416942ab9765ce673a6c75c51f84dd56f7efe0f93465e7372-init/diff:/var/lib/docker/overlay2/9c481b6fc4eda78a970c6b00c96c9f145b5d588195952c506fee1f3a38f6b827/diff:/var/lib/docker/overlay2/47481f9bc7214dddcf62375b5df8c94540c57703b1d6d51ac990043359419126/diff:/var/lib/docker/overlay2/b598a508e6cab858c4e3e0eb9d52d91575db756297302c95c476bb85082f44e0/diff:/var/lib/docker/overlay2/eb2938493863a763d06f90aa9a8774a6cf5891162d070c28c5ea356826cb9c00/diff:/var/lib/docker/overlay2/32de44262b0214eac920d9f248eab5c56907a2e7f6aa69c29c4e46c4074a7cac/diff:/var/lib/docker/overlay2/1a295a0be202f67a8806d1a09e94b6509e529ccee06e851bf52c63e932a41598/diff:/var/lib/docker/overlay2/f31de4b92cc5bd6d7f65b450b1907917e4402f07bc4105fb25aa1f8246c4a4e5/diff:/var/lib/docker/overlay2/91ed229d1177ed82e2ad15ef48cab26e269f4aac446f236ccf7602f7812b4844/diff:/var/lib/docker/overlay2/d76c7ee0e38e12ccde73d3892d573e4aace99b016da7a10649752a4fe4fe8638/diff:/var/lib/docker/overlay2/5ded14
3e1cfe20dc794363143043800e9ddaa724457e76dfc3480bc88bdcf50b/diff:/var/lib/docker/overlay2/93a2cbd1b1d5abd2ffd6d5d72e40d5e932e3cdee4ef9b08c0ff0e340f0b82250/diff:/var/lib/docker/overlay2/bb3f5e5d2796cb20f7b19463a04326cc5e79a70f606d384a018bf2677fdc9c59/diff:/var/lib/docker/overlay2/9cbefe5a0d987b7c33fa83edf8c1c1e50920e178002ea80eff9cdf15510e9f7b/diff:/var/lib/docker/overlay2/33e82fcf8ab49ebe12677ce9de48d21aef3b5793b44b0bcabc73c3674c581010/diff:/var/lib/docker/overlay2/e11a40d223ac259cc9385f5df423a4114d92d60488a253e2c2690c0b7457a8e8/diff:/var/lib/docker/overlay2/61a6833055db19e06daf765b9f85b5e3a0b5c194642afb06034ca3aba1f55909/diff:/var/lib/docker/overlay2/ee3319bed930e23040288482eca784537f46b47463ff427584d9b2b5211797e9/diff:/var/lib/docker/overlay2/eee58dbcea38f98302a06040b8570e985d703290fd458ac7329bfa0e55bd8448/diff:/var/lib/docker/overlay2/b658fe28fcf5e1fc3d6b2e9b4b847d6121a8b983e9c7fb1985d5ab2346e98f8b/diff:/var/lib/docker/overlay2/31ea9d8f709524c49028c58c0f8a05346bb654e8f78be840f557b2bedf65a54a/diff:/var/lib/d
ocker/overlay2/3cf9440c13010ba35d99312d9b15c642407767bf1e0fc45d05a2549c249afec7/diff:/var/lib/docker/overlay2/800878867e03c9e3e3b31fd2704a00aa2ae0e3986914887bf3d409b9550ff520/diff:/var/lib/docker/overlay2/581fb56aa8110438414995ed4c4a14b9912c75de1736b4b7d23e0b8fe720ecd9/diff:/var/lib/docker/overlay2/07b86cd484da6c8fb2f4bee904f870665d71591e8dc7be5efa53e1794b81d00f/diff:/var/lib/docker/overlay2/1079786f8be32b332c09a1406a8c3309f812525af6501891ff134bf591345a8d/diff:/var/lib/docker/overlay2/aaaabdfc565926946334338a8c5990344cee39d099a4b9f5151467564c8b476e/diff:/var/lib/docker/overlay2/f39918c58fc40801faea98e8de1249f865a73650e08eab314c9d152e6a3b34b5/diff:/var/lib/docker/overlay2/974fee46992fba128bb6ec5407ff459f67134f9674349df844ad7efddb1ce197/diff:/var/lib/docker/overlay2/da1fb4192fa1a6e35f4cad8340c23595b9197c7b307dc3acbddafef7c3eabc82/diff:/var/lib/docker/overlay2/3b77e137962b80628f381627a68f75ff468d7ccae4458d96fa0579ac771218fe/diff:/var/lib/docker/overlay2/3faa717465cabff3baa6c98c646cb65ad47d063b215430893bb7f045676
3a794/diff:/var/lib/docker/overlay2/9597f4dd91d0689567f5b93707e47b90e9d9ba308488dff4c828008698d40802/diff:/var/lib/docker/overlay2/04df968177dde5eecc89152d1a182fb9b925a8092117778b87497db1e350ce61/diff:/var/lib/docker/overlay2/466c44ae57218cc92c3105e1e0070db81834724a69f8a4417391186e96073aae/diff:/var/lib/docker/overlay2/ec2a2a479f00a246da5470537471969c82985b2cfb98a4e35c9ff415d2a73141/diff:/var/lib/docker/overlay2/bce991b71b3cfd9fbda43bee1a05c1f7f5d1182ddbcf8f503851bd5874171f2b/diff:/var/lib/docker/overlay2/fe2929ef3680f5c3a0bd3eb8595ea97b8e43bba3f4c05803abf70a6c92a68114/diff:/var/lib/docker/overlay2/656654dcbc6d86ba911abe2cc555591ab82c37f6960cd0dad3551f4bf80122ee/diff:/var/lib/docker/overlay2/0e7faeb1891565534bd5d77c8619c32e7b51715e75dc04640b4ef0c3bc72a6b8/diff:/var/lib/docker/overlay2/9f0ce2e9bbc988036f4207b6b8237ba1f52855f6ee97e629aa058d0adcb5e7c4/diff:/var/lib/docker/overlay2/2bd42f9be2610b8910f7b39f5ce1792b9f608e775e6d3b39b12fef011686b5bd/diff:/var/lib/docker/overlay2/79703f59d1d8ea0485fb2360a75307a714a167
fc401d5ce68a58a9a201ea4152/diff:/var/lib/docker/overlay2/e589647f3d60f11426627ec161f2bf8878a577bc3388bb26e206fc5182f4f83c/diff:/var/lib/docker/overlay2/059bdd38117e61965c55837c7fd45d7e2825b162ea4e4cab6a656a436a7bbed6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/daf10bc517a9240416942ab9765ce673a6c75c51f84dd56f7efe0f93465e7372/merged",
	                "UpperDir": "/var/lib/docker/overlay2/daf10bc517a9240416942ab9765ce673a6c75c51f84dd56f7efe0f93465e7372/diff",
	                "WorkDir": "/var/lib/docker/overlay2/daf10bc517a9240416942ab9765ce673a6c75c51f84dd56f7efe0f93465e7372/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-211000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-211000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-211000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-211000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-211000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a69b2789021e4126a3c29270fc9375d483cc4b68d75c11e5486c533532a5d18",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50707"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50708"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50704"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50705"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50706"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3a69b2789021",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-211000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cfe970184a06",
	                        "ingress-addon-legacy-211000"
	                    ],
	                    "NetworkID": "b6da5fcdcd3b940943ea12439d8f468a4a42b9cec21b54dfa9bba8f249cfb463",
	                    "EndpointID": "0728e9e15998681a2465de7ee833df8f2d88bfc8e18f356b5fb06aefe38f1132",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
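The inspect output above is what the post-mortem helpers work from; minikube's own cli_runner queries the same data with a Go-template -f expression to find which host port a published container port is bound to (the ingress-dns log further below shows it doing exactly that for 22/tcp). The following is a small sketch of the same probe for the apiserver port, assuming only Docker and the Go standard library; the container name and "8443/tcp" come from the inspect output and are otherwise just examples.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort asks Docker which host port a published container port is bound to,
    // using the same Go-template form that appears in the cli_runner log lines.
    func hostPort(container, containerPort string) (string, error) {
        format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostPort("ingress-addon-legacy-211000", "8443/tcp")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        // In the inspect output above this is 50706, bound on 127.0.0.1.
        fmt.Println("apiserver published on host port", port)
    }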
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-211000 -n ingress-addon-legacy-211000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-211000 -n ingress-addon-legacy-211000: exit status 6 (405.936482ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0124 09:45:51.208404    8359 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-211000" does not appear in /Users/jenkins/minikube-integration/15565-3057/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-211000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.63s)
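The status.go error above ("ingress-addon-legacy-211000" does not appear in .../kubeconfig) is a kubeconfig-context check: the profile's context was never written (or was removed), so there is no endpoint left to extract. A hedged sketch of that check using client-go's clientcmd loader follows; the file path and context name are the ones from the log and are illustrative only.

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative path and context name, taken from the log lines above.
        cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/15565-3057/kubeconfig")
        if err != nil {
            fmt.Println("cannot read kubeconfig:", err)
            return
        }
        ctx, ok := cfg.Contexts["ingress-addon-legacy-211000"]
        if !ok {
            // The situation the failing status check reports: no context, so no
            // endpoint (IP/port) can be extracted for the profile.
            fmt.Println(`context "ingress-addon-legacy-211000" does not appear in the kubeconfig`)
            return
        }
        if cluster, ok := cfg.Clusters[ctx.Cluster]; ok {
            fmt.Println("apiserver endpoint:", cluster.Server)
        }
    }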

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.6s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-211000 addons enable ingress-dns --alsologtostderr -v=5
E0124 09:46:50.939014    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-211000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.076912559s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0124 09:45:51.274406    8369 out.go:296] Setting OutFile to fd 1 ...
	I0124 09:45:51.274735    8369 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 09:45:51.274741    8369 out.go:309] Setting ErrFile to fd 2...
	I0124 09:45:51.274745    8369 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 09:45:51.274859    8369 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
	I0124 09:45:51.297041    8369 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0124 09:45:51.319118    8369 config.go:180] Loaded profile config "ingress-addon-legacy-211000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0124 09:45:51.319156    8369 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-211000"
	I0124 09:45:51.319167    8369 addons.go:227] Setting addon ingress-dns=true in "ingress-addon-legacy-211000"
	I0124 09:45:51.319690    8369 host.go:66] Checking if "ingress-addon-legacy-211000" exists ...
	I0124 09:45:51.320738    8369 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211000 --format={{.State.Status}}
	I0124 09:45:51.400021    8369 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0124 09:45:51.422097    8369 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0124 09:45:51.443775    8369 addons.go:419] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0124 09:45:51.443814    8369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0124 09:45:51.443973    8369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
	I0124 09:45:51.501138    8369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50707 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa Username:docker}
	I0124 09:45:51.601037    8369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0124 09:45:51.650887    8369 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:51.650910    8369 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:51.929311    8369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0124 09:45:51.983542    8369 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:51.983561    8369 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:52.526088    8369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0124 09:45:52.580090    8369 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:52.580104    8369 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:53.237150    8369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0124 09:45:53.294221    8369 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:53.294256    8369 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:54.087295    8369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0124 09:45:54.140452    8369 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:54.140469    8369 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:55.311022    8369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0124 09:45:55.362463    8369 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:55.362479    8369 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:57.617164    8369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0124 09:45:57.670772    8369 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:57.670787    8369 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:59.282632    8369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0124 09:45:59.336916    8369 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:45:59.336931    8369 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:46:02.143591    8369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0124 09:46:02.196840    8369 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:46:02.196854    8369 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:46:06.022849    8369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0124 09:46:06.076356    8369 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:46:06.076372    8369 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:46:13.774172    8369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0124 09:46:13.827728    8369 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:46:13.827743    8369 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:46:28.463810    8369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0124 09:46:28.516217    8369 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:46:28.516232    8369 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:46:56.924865    8369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0124 09:46:56.978582    8369 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:46:56.978602    8369 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:47:20.148856    8369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0124 09:47:20.203265    8369 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0124 09:47:20.225207    8369 out.go:177] 
	W0124 09:47:20.246397    8369 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0124 09:47:20.246424    8369 out.go:239] * 
	* 
	W0124 09:47:20.250231    8369 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0124 09:47:20.272214    8369 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-211000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-211000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1",
	        "Created": "2023-01-24T17:40:18.731720215Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 52367,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-24T17:40:19.030297941Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1/hosts",
	        "LogPath": "/var/lib/docker/containers/cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1/cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1-json.log",
	        "Name": "/ingress-addon-legacy-211000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-211000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-211000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/daf10bc517a9240416942ab9765ce673a6c75c51f84dd56f7efe0f93465e7372-init/diff:/var/lib/docker/overlay2/9c481b6fc4eda78a970c6b00c96c9f145b5d588195952c506fee1f3a38f6b827/diff:/var/lib/docker/overlay2/47481f9bc7214dddcf62375b5df8c94540c57703b1d6d51ac990043359419126/diff:/var/lib/docker/overlay2/b598a508e6cab858c4e3e0eb9d52d91575db756297302c95c476bb85082f44e0/diff:/var/lib/docker/overlay2/eb2938493863a763d06f90aa9a8774a6cf5891162d070c28c5ea356826cb9c00/diff:/var/lib/docker/overlay2/32de44262b0214eac920d9f248eab5c56907a2e7f6aa69c29c4e46c4074a7cac/diff:/var/lib/docker/overlay2/1a295a0be202f67a8806d1a09e94b6509e529ccee06e851bf52c63e932a41598/diff:/var/lib/docker/overlay2/f31de4b92cc5bd6d7f65b450b1907917e4402f07bc4105fb25aa1f8246c4a4e5/diff:/var/lib/docker/overlay2/91ed229d1177ed82e2ad15ef48cab26e269f4aac446f236ccf7602f7812b4844/diff:/var/lib/docker/overlay2/d76c7ee0e38e12ccde73d3892d573e4aace99b016da7a10649752a4fe4fe8638/diff:/var/lib/docker/overlay2/5ded14
3e1cfe20dc794363143043800e9ddaa724457e76dfc3480bc88bdcf50b/diff:/var/lib/docker/overlay2/93a2cbd1b1d5abd2ffd6d5d72e40d5e932e3cdee4ef9b08c0ff0e340f0b82250/diff:/var/lib/docker/overlay2/bb3f5e5d2796cb20f7b19463a04326cc5e79a70f606d384a018bf2677fdc9c59/diff:/var/lib/docker/overlay2/9cbefe5a0d987b7c33fa83edf8c1c1e50920e178002ea80eff9cdf15510e9f7b/diff:/var/lib/docker/overlay2/33e82fcf8ab49ebe12677ce9de48d21aef3b5793b44b0bcabc73c3674c581010/diff:/var/lib/docker/overlay2/e11a40d223ac259cc9385f5df423a4114d92d60488a253e2c2690c0b7457a8e8/diff:/var/lib/docker/overlay2/61a6833055db19e06daf765b9f85b5e3a0b5c194642afb06034ca3aba1f55909/diff:/var/lib/docker/overlay2/ee3319bed930e23040288482eca784537f46b47463ff427584d9b2b5211797e9/diff:/var/lib/docker/overlay2/eee58dbcea38f98302a06040b8570e985d703290fd458ac7329bfa0e55bd8448/diff:/var/lib/docker/overlay2/b658fe28fcf5e1fc3d6b2e9b4b847d6121a8b983e9c7fb1985d5ab2346e98f8b/diff:/var/lib/docker/overlay2/31ea9d8f709524c49028c58c0f8a05346bb654e8f78be840f557b2bedf65a54a/diff:/var/lib/d
ocker/overlay2/3cf9440c13010ba35d99312d9b15c642407767bf1e0fc45d05a2549c249afec7/diff:/var/lib/docker/overlay2/800878867e03c9e3e3b31fd2704a00aa2ae0e3986914887bf3d409b9550ff520/diff:/var/lib/docker/overlay2/581fb56aa8110438414995ed4c4a14b9912c75de1736b4b7d23e0b8fe720ecd9/diff:/var/lib/docker/overlay2/07b86cd484da6c8fb2f4bee904f870665d71591e8dc7be5efa53e1794b81d00f/diff:/var/lib/docker/overlay2/1079786f8be32b332c09a1406a8c3309f812525af6501891ff134bf591345a8d/diff:/var/lib/docker/overlay2/aaaabdfc565926946334338a8c5990344cee39d099a4b9f5151467564c8b476e/diff:/var/lib/docker/overlay2/f39918c58fc40801faea98e8de1249f865a73650e08eab314c9d152e6a3b34b5/diff:/var/lib/docker/overlay2/974fee46992fba128bb6ec5407ff459f67134f9674349df844ad7efddb1ce197/diff:/var/lib/docker/overlay2/da1fb4192fa1a6e35f4cad8340c23595b9197c7b307dc3acbddafef7c3eabc82/diff:/var/lib/docker/overlay2/3b77e137962b80628f381627a68f75ff468d7ccae4458d96fa0579ac771218fe/diff:/var/lib/docker/overlay2/3faa717465cabff3baa6c98c646cb65ad47d063b215430893bb7f045676
3a794/diff:/var/lib/docker/overlay2/9597f4dd91d0689567f5b93707e47b90e9d9ba308488dff4c828008698d40802/diff:/var/lib/docker/overlay2/04df968177dde5eecc89152d1a182fb9b925a8092117778b87497db1e350ce61/diff:/var/lib/docker/overlay2/466c44ae57218cc92c3105e1e0070db81834724a69f8a4417391186e96073aae/diff:/var/lib/docker/overlay2/ec2a2a479f00a246da5470537471969c82985b2cfb98a4e35c9ff415d2a73141/diff:/var/lib/docker/overlay2/bce991b71b3cfd9fbda43bee1a05c1f7f5d1182ddbcf8f503851bd5874171f2b/diff:/var/lib/docker/overlay2/fe2929ef3680f5c3a0bd3eb8595ea97b8e43bba3f4c05803abf70a6c92a68114/diff:/var/lib/docker/overlay2/656654dcbc6d86ba911abe2cc555591ab82c37f6960cd0dad3551f4bf80122ee/diff:/var/lib/docker/overlay2/0e7faeb1891565534bd5d77c8619c32e7b51715e75dc04640b4ef0c3bc72a6b8/diff:/var/lib/docker/overlay2/9f0ce2e9bbc988036f4207b6b8237ba1f52855f6ee97e629aa058d0adcb5e7c4/diff:/var/lib/docker/overlay2/2bd42f9be2610b8910f7b39f5ce1792b9f608e775e6d3b39b12fef011686b5bd/diff:/var/lib/docker/overlay2/79703f59d1d8ea0485fb2360a75307a714a167
fc401d5ce68a58a9a201ea4152/diff:/var/lib/docker/overlay2/e589647f3d60f11426627ec161f2bf8878a577bc3388bb26e206fc5182f4f83c/diff:/var/lib/docker/overlay2/059bdd38117e61965c55837c7fd45d7e2825b162ea4e4cab6a656a436a7bbed6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/daf10bc517a9240416942ab9765ce673a6c75c51f84dd56f7efe0f93465e7372/merged",
	                "UpperDir": "/var/lib/docker/overlay2/daf10bc517a9240416942ab9765ce673a6c75c51f84dd56f7efe0f93465e7372/diff",
	                "WorkDir": "/var/lib/docker/overlay2/daf10bc517a9240416942ab9765ce673a6c75c51f84dd56f7efe0f93465e7372/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-211000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-211000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-211000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-211000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-211000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a69b2789021e4126a3c29270fc9375d483cc4b68d75c11e5486c533532a5d18",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50707"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50708"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50704"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50705"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50706"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3a69b2789021",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-211000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cfe970184a06",
	                        "ingress-addon-legacy-211000"
	                    ],
	                    "NetworkID": "b6da5fcdcd3b940943ea12439d8f468a4a42b9cec21b54dfa9bba8f249cfb463",
	                    "EndpointID": "0728e9e15998681a2465de7ee833df8f2d88bfc8e18f356b5fb06aefe38f1132",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
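For reference, the port mappings and container address recorded in the inspect dump above can be read back with docker inspect's Go-template formatter; a minimal sketch (one-off queries, not part of the captured test run, with the container name taken from the log):

  docker inspect -f '{{ .State.Status }}' ingress-addon-legacy-211000                                               # running
  docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' ingress-addon-legacy-211000  # 50707
  docker inspect -f '{{ (index .NetworkSettings.Networks "ingress-addon-legacy-211000").IPAddress }}' ingress-addon-legacy-211000  # 192.168.49.2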
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-211000 -n ingress-addon-legacy-211000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-211000 -n ingress-addon-legacy-211000: exit status 6 (459.900738ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0124 09:47:20.803622    8463 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-211000" does not appear in /Users/jenkins/minikube-integration/15565-3057/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-211000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.60s)
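The exit status 6 above comes from the profile missing from the kubeconfig ("does not appear in .../kubeconfig"), which is also what the "stale minikube-vm" warning points at. The warning's own suggestion can be checked and applied manually; illustrative commands only (not captured from this run), using the kubeconfig path shown in the log:

  kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/15565-3057/kubeconfig    # would show whether the profile's context exists
  KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-211000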

                                                
                                    

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:171: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-211000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-211000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1",
	        "Created": "2023-01-24T17:40:18.731720215Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 52367,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-24T17:40:19.030297941Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1/hosts",
	        "LogPath": "/var/lib/docker/containers/cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1/cfe970184a061ffcc1603243c206f21c554376a6b2d9e52363619cdbf1e2fab1-json.log",
	        "Name": "/ingress-addon-legacy-211000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-211000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-211000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/daf10bc517a9240416942ab9765ce673a6c75c51f84dd56f7efe0f93465e7372-init/diff:/var/lib/docker/overlay2/9c481b6fc4eda78a970c6b00c96c9f145b5d588195952c506fee1f3a38f6b827/diff:/var/lib/docker/overlay2/47481f9bc7214dddcf62375b5df8c94540c57703b1d6d51ac990043359419126/diff:/var/lib/docker/overlay2/b598a508e6cab858c4e3e0eb9d52d91575db756297302c95c476bb85082f44e0/diff:/var/lib/docker/overlay2/eb2938493863a763d06f90aa9a8774a6cf5891162d070c28c5ea356826cb9c00/diff:/var/lib/docker/overlay2/32de44262b0214eac920d9f248eab5c56907a2e7f6aa69c29c4e46c4074a7cac/diff:/var/lib/docker/overlay2/1a295a0be202f67a8806d1a09e94b6509e529ccee06e851bf52c63e932a41598/diff:/var/lib/docker/overlay2/f31de4b92cc5bd6d7f65b450b1907917e4402f07bc4105fb25aa1f8246c4a4e5/diff:/var/lib/docker/overlay2/91ed229d1177ed82e2ad15ef48cab26e269f4aac446f236ccf7602f7812b4844/diff:/var/lib/docker/overlay2/d76c7ee0e38e12ccde73d3892d573e4aace99b016da7a10649752a4fe4fe8638/diff:/var/lib/docker/overlay2/5ded14
3e1cfe20dc794363143043800e9ddaa724457e76dfc3480bc88bdcf50b/diff:/var/lib/docker/overlay2/93a2cbd1b1d5abd2ffd6d5d72e40d5e932e3cdee4ef9b08c0ff0e340f0b82250/diff:/var/lib/docker/overlay2/bb3f5e5d2796cb20f7b19463a04326cc5e79a70f606d384a018bf2677fdc9c59/diff:/var/lib/docker/overlay2/9cbefe5a0d987b7c33fa83edf8c1c1e50920e178002ea80eff9cdf15510e9f7b/diff:/var/lib/docker/overlay2/33e82fcf8ab49ebe12677ce9de48d21aef3b5793b44b0bcabc73c3674c581010/diff:/var/lib/docker/overlay2/e11a40d223ac259cc9385f5df423a4114d92d60488a253e2c2690c0b7457a8e8/diff:/var/lib/docker/overlay2/61a6833055db19e06daf765b9f85b5e3a0b5c194642afb06034ca3aba1f55909/diff:/var/lib/docker/overlay2/ee3319bed930e23040288482eca784537f46b47463ff427584d9b2b5211797e9/diff:/var/lib/docker/overlay2/eee58dbcea38f98302a06040b8570e985d703290fd458ac7329bfa0e55bd8448/diff:/var/lib/docker/overlay2/b658fe28fcf5e1fc3d6b2e9b4b847d6121a8b983e9c7fb1985d5ab2346e98f8b/diff:/var/lib/docker/overlay2/31ea9d8f709524c49028c58c0f8a05346bb654e8f78be840f557b2bedf65a54a/diff:/var/lib/d
ocker/overlay2/3cf9440c13010ba35d99312d9b15c642407767bf1e0fc45d05a2549c249afec7/diff:/var/lib/docker/overlay2/800878867e03c9e3e3b31fd2704a00aa2ae0e3986914887bf3d409b9550ff520/diff:/var/lib/docker/overlay2/581fb56aa8110438414995ed4c4a14b9912c75de1736b4b7d23e0b8fe720ecd9/diff:/var/lib/docker/overlay2/07b86cd484da6c8fb2f4bee904f870665d71591e8dc7be5efa53e1794b81d00f/diff:/var/lib/docker/overlay2/1079786f8be32b332c09a1406a8c3309f812525af6501891ff134bf591345a8d/diff:/var/lib/docker/overlay2/aaaabdfc565926946334338a8c5990344cee39d099a4b9f5151467564c8b476e/diff:/var/lib/docker/overlay2/f39918c58fc40801faea98e8de1249f865a73650e08eab314c9d152e6a3b34b5/diff:/var/lib/docker/overlay2/974fee46992fba128bb6ec5407ff459f67134f9674349df844ad7efddb1ce197/diff:/var/lib/docker/overlay2/da1fb4192fa1a6e35f4cad8340c23595b9197c7b307dc3acbddafef7c3eabc82/diff:/var/lib/docker/overlay2/3b77e137962b80628f381627a68f75ff468d7ccae4458d96fa0579ac771218fe/diff:/var/lib/docker/overlay2/3faa717465cabff3baa6c98c646cb65ad47d063b215430893bb7f045676
3a794/diff:/var/lib/docker/overlay2/9597f4dd91d0689567f5b93707e47b90e9d9ba308488dff4c828008698d40802/diff:/var/lib/docker/overlay2/04df968177dde5eecc89152d1a182fb9b925a8092117778b87497db1e350ce61/diff:/var/lib/docker/overlay2/466c44ae57218cc92c3105e1e0070db81834724a69f8a4417391186e96073aae/diff:/var/lib/docker/overlay2/ec2a2a479f00a246da5470537471969c82985b2cfb98a4e35c9ff415d2a73141/diff:/var/lib/docker/overlay2/bce991b71b3cfd9fbda43bee1a05c1f7f5d1182ddbcf8f503851bd5874171f2b/diff:/var/lib/docker/overlay2/fe2929ef3680f5c3a0bd3eb8595ea97b8e43bba3f4c05803abf70a6c92a68114/diff:/var/lib/docker/overlay2/656654dcbc6d86ba911abe2cc555591ab82c37f6960cd0dad3551f4bf80122ee/diff:/var/lib/docker/overlay2/0e7faeb1891565534bd5d77c8619c32e7b51715e75dc04640b4ef0c3bc72a6b8/diff:/var/lib/docker/overlay2/9f0ce2e9bbc988036f4207b6b8237ba1f52855f6ee97e629aa058d0adcb5e7c4/diff:/var/lib/docker/overlay2/2bd42f9be2610b8910f7b39f5ce1792b9f608e775e6d3b39b12fef011686b5bd/diff:/var/lib/docker/overlay2/79703f59d1d8ea0485fb2360a75307a714a167
fc401d5ce68a58a9a201ea4152/diff:/var/lib/docker/overlay2/e589647f3d60f11426627ec161f2bf8878a577bc3388bb26e206fc5182f4f83c/diff:/var/lib/docker/overlay2/059bdd38117e61965c55837c7fd45d7e2825b162ea4e4cab6a656a436a7bbed6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/daf10bc517a9240416942ab9765ce673a6c75c51f84dd56f7efe0f93465e7372/merged",
	                "UpperDir": "/var/lib/docker/overlay2/daf10bc517a9240416942ab9765ce673a6c75c51f84dd56f7efe0f93465e7372/diff",
	                "WorkDir": "/var/lib/docker/overlay2/daf10bc517a9240416942ab9765ce673a6c75c51f84dd56f7efe0f93465e7372/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-211000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-211000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-211000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-211000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-211000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a69b2789021e4126a3c29270fc9375d483cc4b68d75c11e5486c533532a5d18",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50707"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50708"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50704"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50705"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50706"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3a69b2789021",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-211000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cfe970184a06",
	                        "ingress-addon-legacy-211000"
	                    ],
	                    "NetworkID": "b6da5fcdcd3b940943ea12439d8f468a4a42b9cec21b54dfa9bba8f249cfb463",
	                    "EndpointID": "0728e9e15998681a2465de7ee833df8f2d88bfc8e18f356b5fb06aefe38f1132",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-211000 -n ingress-addon-legacy-211000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-211000 -n ingress-addon-legacy-211000: exit status 6 (395.26764ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0124 09:47:21.256185    8475 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-211000" does not appear in /Users/jenkins/minikube-integration/15565-3057/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-211000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)
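As with the previous subtest, the apiserver's published port in the inspect dump can be cross-checked with docker port; a hypothetical follow-up command (value taken from the log above):

  docker port ingress-addon-legacy-211000 8443/tcp   # 127.0.0.1:50706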

                                                
                                    
TestRunningBinaryUpgrade (66.06s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3740583452.exe start -p running-upgrade-847000 --memory=2200 --vm-driver=docker 
E0124 10:12:45.063736    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 10:12:53.581947    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:12:53.587206    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:12:53.597986    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:12:53.618031    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:12:53.659128    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:12:53.740016    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:12:53.900208    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:12:54.220814    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:12:54.860969    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:12:56.141187    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:12:58.702026    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:13:03.822869    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3740583452.exe start -p running-upgrade-847000 --memory=2200 --vm-driver=docker : exit status 70 (51.67723189s)

                                                
                                                
-- stdout --
	! [running-upgrade-847000] minikube v1.9.0 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig652784490
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-24 18:12:45.045575226 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-847000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-24 18:13:04.292577606 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-847000", then "minikube start -p running-upgrade-847000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.28.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.28.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 33.00 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 81.64 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 127.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 170.58 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 211.77 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 259.45 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 305.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 353.30 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 401.89 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 446.20 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 488.97 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 532.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-24 18:13:04.292577606 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
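The generated unit text in the diff above relies on standard systemd override behaviour: for a non-oneshot service, an empty ExecStart= assignment has to precede the replacement command, otherwise systemd rejects the unit with "Service has more than one ExecStart= setting". A minimal illustrative override showing that mechanism (file path and daemon flags are placeholders, not minikube's actual provisioner output):

  # /etc/systemd/system/docker.service.d/override.conf
  [Service]
  # Clear the ExecStart inherited from the base unit before setting a new one.
  ExecStart=
  ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock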
version_upgrade_test.go:128: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3740583452.exe start -p running-upgrade-847000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3740583452.exe start -p running-upgrade-847000 --memory=2200 --vm-driver=docker : exit status 70 (4.328504086s)

                                                
                                                
-- stdout --
	* [running-upgrade-847000] minikube v1.9.0 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig926526605
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-847000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3740583452.exe start -p running-upgrade-847000 --memory=2200 --vm-driver=docker 
E0124 10:13:14.062848    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3740583452.exe start -p running-upgrade-847000 --memory=2200 --vm-driver=docker : exit status 70 (4.301351485s)

                                                
                                                
-- stdout --
	* [running-upgrade-847000] minikube v1.9.0 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1512942726
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-847000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
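The "systemctl status docker.service" / "journalctl -xe" follow-up suggested by the error text can be run inside the kic container while it is still up, since the kicbase image runs systemd as PID 1; an illustrative debugging step (not part of the test, container name taken from the log):

  docker exec running-upgrade-847000 systemctl status docker.service --no-pager
  docker exec running-upgrade-847000 journalctl -u docker.service --no-pager -n 50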
version_upgrade_test.go:134: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-01-24 10:13:17.772822 -0800 PST m=+2755.797622639
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-847000
helpers_test.go:235: (dbg) docker inspect running-upgrade-847000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7fbaaac4ea5f2d125a7805af3a94a219da00297500d555e9972b7ca46c32b5ff",
	        "Created": "2023-01-24T18:12:53.268156723Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 186202,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-24T18:12:53.495601249Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/7fbaaac4ea5f2d125a7805af3a94a219da00297500d555e9972b7ca46c32b5ff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fbaaac4ea5f2d125a7805af3a94a219da00297500d555e9972b7ca46c32b5ff/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fbaaac4ea5f2d125a7805af3a94a219da00297500d555e9972b7ca46c32b5ff/hosts",
	        "LogPath": "/var/lib/docker/containers/7fbaaac4ea5f2d125a7805af3a94a219da00297500d555e9972b7ca46c32b5ff/7fbaaac4ea5f2d125a7805af3a94a219da00297500d555e9972b7ca46c32b5ff-json.log",
	        "Name": "/running-upgrade-847000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-847000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dc45ca799bfb8f567520825a7f2e5b54099671180134a200538ef9234bcea4e1-init/diff:/var/lib/docker/overlay2/ca806d60e032750fb69c7badf9f4997738a2b81bfb5912b54cd5771a42db76fb/diff:/var/lib/docker/overlay2/f847d4e750bb400c91b1e043964825c29b0b02878265fe3437ace783c0712621/diff:/var/lib/docker/overlay2/4c97a08be5febfd68e8a30db90d194f25241e8ad94e921091bca8a86e18f4020/diff:/var/lib/docker/overlay2/f16e66080f2657efe212351eca7691357c2c35eed6f7d10b112bdb808bae64b2/diff:/var/lib/docker/overlay2/29f6572e606a68090178ad0b8c1d4a153d4a0a3e98998b3280dde542be76d182/diff:/var/lib/docker/overlay2/0b174254b71da2f3574dfd9bc32cede212a40e18450b398a31aad42a33a1c7f5/diff:/var/lib/docker/overlay2/0b69634403c40116f7e58cb5aeba20851b0d7b04ea854ca408253a60195221b8/diff:/var/lib/docker/overlay2/5f290bcce646f39d9d36f5b59646d810b99e6b181a202c5cca8de134766409d8/diff:/var/lib/docker/overlay2/1892803dde720d65fe83ad06603b2152947fb8e51498cfa60b6165818b4afb8a/diff:/var/lib/docker/overlay2/615000
eef3f5ae8d73ac8452093b02982e02ec58d986eaaa5b0735f93e7b6c5a/diff:/var/lib/docker/overlay2/1a0653b9c57e0a0914a73ebec708611bf6114e3b76e0900f4b3382f0271dcb64/diff:/var/lib/docker/overlay2/9827d070845e9b92e8743a6bd04853deef8be0735f2db07b295da62f705b5676/diff:/var/lib/docker/overlay2/ecfee87b2cee453136254a0bfcb04c67d2f5ac08551c945572ccd88e4c59dba6/diff:/var/lib/docker/overlay2/0fbcfbbd0ee3907cd6f3f2e6f5b91a767510ed7084ded0baf9bc0bb8434d29ad/diff:/var/lib/docker/overlay2/4eb02c490590e7145551663a67911cffb680684879f2f62fb9dd2f736dac3b28/diff:/var/lib/docker/overlay2/f4822dbc23ec4b5e78792134884b0efe65f2ebdacbe3ca11a3fe9979bdd15a7f/diff:/var/lib/docker/overlay2/f66796a2193c86e4e1f981689f0fd89b72dc677f64b031bcaf1af0cdea18d512/diff:/var/lib/docker/overlay2/5323d5dd426ededf38f055e25b41543a6804425414f034f9a0bf5773a74628dc/diff:/var/lib/docker/overlay2/365d84259e2bdad489386d4231d1a2e1f448e81671be77cb2fc783044848db81/diff:/var/lib/docker/overlay2/7b3dda520b15efcd08159c354a046f27e7983dc76b79c0a66fefdb4e42fdf84e/diff:/var/lib/d
ocker/overlay2/6dba9057eb22747b9d81dd7dfa015221acb95ece58e002f2e5d44ac4530c3d5f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dc45ca799bfb8f567520825a7f2e5b54099671180134a200538ef9234bcea4e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dc45ca799bfb8f567520825a7f2e5b54099671180134a200538ef9234bcea4e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dc45ca799bfb8f567520825a7f2e5b54099671180134a200538ef9234bcea4e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-847000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-847000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-847000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-847000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-847000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ca50775d51908de18f08c7baf76729d9983c47b47c2c6ff81d53297524854fd",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52942"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52943"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52944"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3ca50775d519",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "750dff677bee5e581fe860def9348bb467adccead3cb43f76ca3deddf8b3fd8d",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "996de8931b3d63a4fb143a9391d4bc914f95c26bea12eda8578a4dc9773b702e",
	                    "EndpointID": "750dff677bee5e581fe860def9348bb467adccead3cb43f76ca3deddf8b3fd8d",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-847000 -n running-upgrade-847000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-847000 -n running-upgrade-847000: exit status 6 (383.608806ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0124 10:13:18.203090   17551 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-847000" does not appear in /Users/jenkins/minikube-integration/15565-3057/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-847000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-847000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-847000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-847000: (2.336506791s)
--- FAIL: TestRunningBinaryUpgrade (66.06s)

                                                
                                    
TestKubernetesUpgrade (345.25s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-582000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0124 10:14:15.612881    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-582000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m10.394964795s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-582000] minikube v1.28.0 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-582000 in cluster kubernetes-upgrade-582000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.22 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0124 10:14:12.223037   17943 out.go:296] Setting OutFile to fd 1 ...
	I0124 10:14:12.223203   17943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 10:14:12.223209   17943 out.go:309] Setting ErrFile to fd 2...
	I0124 10:14:12.223213   17943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 10:14:12.223343   17943 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
	I0124 10:14:12.223844   17943 out.go:303] Setting JSON to false
	I0124 10:14:12.242098   17943 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4427,"bootTime":1674579625,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0124 10:14:12.242176   17943 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0124 10:14:12.264209   17943 out.go:177] * [kubernetes-upgrade-582000] minikube v1.28.0 on Darwin 13.1
	I0124 10:14:12.285135   17943 out.go:177]   - MINIKUBE_LOCATION=15565
	I0124 10:14:12.285130   17943 notify.go:220] Checking for updates...
	I0124 10:14:12.306216   17943 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 10:14:12.327971   17943 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0124 10:14:12.349061   17943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 10:14:12.370303   17943 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	I0124 10:14:12.392014   17943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0124 10:14:12.413980   17943 config.go:180] Loaded profile config "cert-expiration-602000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 10:14:12.414092   17943 driver.go:365] Setting default libvirt URI to qemu:///system
	I0124 10:14:12.473986   17943 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0124 10:14:12.474109   17943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 10:14:12.618394   17943 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-24 18:14:12.415026144 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 10:14:12.661204   17943 out.go:177] * Using the docker driver based on user configuration
	I0124 10:14:12.682328   17943 start.go:296] selected driver: docker
	I0124 10:14:12.682342   17943 start.go:840] validating driver "docker" against <nil>
	I0124 10:14:12.682354   17943 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0124 10:14:12.684788   17943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 10:14:12.825184   17943 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-24 18:14:12.623909275 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 10:14:12.825300   17943 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0124 10:14:12.825442   17943 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0124 10:14:12.848405   17943 out.go:177] * Using Docker Desktop driver with root privileges
	I0124 10:14:12.869156   17943 cni.go:84] Creating CNI manager for ""
	I0124 10:14:12.869221   17943 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0124 10:14:12.869239   17943 start_flags.go:319] config:
	{Name:kubernetes-upgrade-582000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-582000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:14:12.901306   17943 out.go:177] * Starting control plane node kubernetes-upgrade-582000 in cluster kubernetes-upgrade-582000
	I0124 10:14:12.923093   17943 cache.go:120] Beginning downloading kic base image for docker with docker
	I0124 10:14:12.960314   17943 out.go:177] * Pulling base image ...
	I0124 10:14:13.002338   17943 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0124 10:14:13.002328   17943 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0124 10:14:13.002450   17943 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0124 10:14:13.002485   17943 cache.go:57] Caching tarball of preloaded images
	I0124 10:14:13.002718   17943 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0124 10:14:13.002737   17943 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0124 10:14:13.003790   17943 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/config.json ...
	I0124 10:14:13.003924   17943 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/config.json: {Name:mk7e80e331f86710f892b3c9988d72f8823fdb1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:14:13.059004   17943 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0124 10:14:13.059025   17943 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0124 10:14:13.059042   17943 cache.go:193] Successfully downloaded all kic artifacts
	I0124 10:14:13.059087   17943 start.go:364] acquiring machines lock for kubernetes-upgrade-582000: {Name:mkaacab342eef8d0343c4381ddebf23c29aac920 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0124 10:14:13.059237   17943 start.go:368] acquired machines lock for "kubernetes-upgrade-582000" in 138.601µs
	I0124 10:14:13.059264   17943 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-582000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-582000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0124 10:14:13.059346   17943 start.go:125] createHost starting for "" (driver="docker")
	I0124 10:14:13.081184   17943 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0124 10:14:13.081600   17943 start.go:159] libmachine.API.Create for "kubernetes-upgrade-582000" (driver="docker")
	I0124 10:14:13.081641   17943 client.go:168] LocalClient.Create starting
	I0124 10:14:13.081832   17943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem
	I0124 10:14:13.081913   17943 main.go:141] libmachine: Decoding PEM data...
	I0124 10:14:13.081944   17943 main.go:141] libmachine: Parsing certificate...
	I0124 10:14:13.082041   17943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem
	I0124 10:14:13.082099   17943 main.go:141] libmachine: Decoding PEM data...
	I0124 10:14:13.082117   17943 main.go:141] libmachine: Parsing certificate...
	I0124 10:14:13.082857   17943 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-582000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0124 10:14:13.137762   17943 cli_runner.go:211] docker network inspect kubernetes-upgrade-582000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0124 10:14:13.137863   17943 network_create.go:281] running [docker network inspect kubernetes-upgrade-582000] to gather additional debugging logs...
	I0124 10:14:13.137882   17943 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-582000
	W0124 10:14:13.190795   17943 cli_runner.go:211] docker network inspect kubernetes-upgrade-582000 returned with exit code 1
	I0124 10:14:13.190828   17943 network_create.go:284] error running [docker network inspect kubernetes-upgrade-582000]: docker network inspect kubernetes-upgrade-582000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-582000
	I0124 10:14:13.190840   17943 network_create.go:286] output of [docker network inspect kubernetes-upgrade-582000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-582000
	
	** /stderr **
	I0124 10:14:13.190918   17943 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0124 10:14:13.245477   17943 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0124 10:14:13.245816   17943 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001388730}
	I0124 10:14:13.245829   17943 network_create.go:123] attempt to create docker network kubernetes-upgrade-582000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0124 10:14:13.245890   17943 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-582000 kubernetes-upgrade-582000
	W0124 10:14:13.299651   17943 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-582000 kubernetes-upgrade-582000 returned with exit code 1
	W0124 10:14:13.299695   17943 network_create.go:148] failed to create docker network kubernetes-upgrade-582000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-582000 kubernetes-upgrade-582000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0124 10:14:13.299715   17943 network_create.go:115] failed to create docker network kubernetes-upgrade-582000 192.168.58.0/24, will retry: subnet is taken
	I0124 10:14:13.301098   17943 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0124 10:14:13.302408   17943 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000eee9e0}
	I0124 10:14:13.302432   17943 network_create.go:123] attempt to create docker network kubernetes-upgrade-582000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0124 10:14:13.302513   17943 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-582000 kubernetes-upgrade-582000
	W0124 10:14:13.357336   17943 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-582000 kubernetes-upgrade-582000 returned with exit code 1
	W0124 10:14:13.357384   17943 network_create.go:148] failed to create docker network kubernetes-upgrade-582000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-582000 kubernetes-upgrade-582000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0124 10:14:13.357407   17943 network_create.go:115] failed to create docker network kubernetes-upgrade-582000 192.168.67.0/24, will retry: subnet is taken
	I0124 10:14:13.358722   17943 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0124 10:14:13.359033   17943 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001388e50}
	I0124 10:14:13.359046   17943 network_create.go:123] attempt to create docker network kubernetes-upgrade-582000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0124 10:14:13.359110   17943 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-582000 kubernetes-upgrade-582000
	I0124 10:14:13.446627   17943 network_create.go:107] docker network kubernetes-upgrade-582000 192.168.76.0/24 created
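
The two "Pool overlaps with other one on this address space" failures above mean another Docker network already claims the candidate subnet; networks left behind by earlier profiles in this run (for example cert-expiration-602000, whose config is loaded at 10:14:12.413980) are the likely owners of 192.168.58.0/24 and 192.168.67.0/24, which is why minikube walks up to the free 192.168.76.0/24. To confirm which subnets are taken on the host, assuming a standard Docker CLI, something like:

    docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' $(docker network ls -q)
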
	I0124 10:14:13.446663   17943 kic.go:117] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-582000" container
	I0124 10:14:13.446792   17943 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0124 10:14:13.501363   17943 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-582000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-582000 --label created_by.minikube.sigs.k8s.io=true
	I0124 10:14:13.557165   17943 oci.go:103] Successfully created a docker volume kubernetes-upgrade-582000
	I0124 10:14:13.557316   17943 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-582000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-582000 --entrypoint /usr/bin/test -v kubernetes-upgrade-582000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -d /var/lib
	I0124 10:14:14.005699   17943 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-582000
	I0124 10:14:14.005730   17943 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0124 10:14:14.005753   17943 kic.go:190] Starting extracting preloaded images to volume ...
	I0124 10:14:14.005880   17943 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-582000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir
	I0124 10:14:19.724443   17943 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-582000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir: (5.718453379s)
	I0124 10:14:19.724462   17943 kic.go:199] duration metric: took 5.718666 seconds to extract preloaded images to volume
	I0124 10:14:19.724573   17943 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0124 10:14:19.869259   17943 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-582000 --name kubernetes-upgrade-582000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-582000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-582000 --network kubernetes-upgrade-582000 --ip 192.168.76.2 --volume kubernetes-upgrade-582000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a
	I0124 10:14:20.225432   17943 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-582000 --format={{.State.Running}}
	I0124 10:14:20.288591   17943 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-582000 --format={{.State.Status}}
	I0124 10:14:20.353856   17943 cli_runner.go:164] Run: docker exec kubernetes-upgrade-582000 stat /var/lib/dpkg/alternatives/iptables
	I0124 10:14:20.465510   17943 oci.go:144] the created container "kubernetes-upgrade-582000" has a running status.
	I0124 10:14:20.465545   17943 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/kubernetes-upgrade-582000/id_rsa...
	I0124 10:14:20.692458   17943 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/kubernetes-upgrade-582000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0124 10:14:20.794816   17943 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-582000 --format={{.State.Status}}
	I0124 10:14:20.962291   17943 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0124 10:14:20.962311   17943 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-582000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0124 10:14:21.070170   17943 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-582000 --format={{.State.Status}}
	I0124 10:14:21.129192   17943 machine.go:88] provisioning docker machine ...
	I0124 10:14:21.129238   17943 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-582000"
	I0124 10:14:21.129331   17943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-582000
	I0124 10:14:21.187162   17943 main.go:141] libmachine: Using SSH client type: native
	I0124 10:14:21.187353   17943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 53059 <nil> <nil>}
	I0124 10:14:21.187368   17943 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-582000 && echo "kubernetes-upgrade-582000" | sudo tee /etc/hostname
	I0124 10:14:21.332418   17943 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-582000
	
	I0124 10:14:21.332525   17943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-582000
	I0124 10:14:21.389458   17943 main.go:141] libmachine: Using SSH client type: native
	I0124 10:14:21.389632   17943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 53059 <nil> <nil>}
	I0124 10:14:21.389645   17943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-582000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-582000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-582000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0124 10:14:21.525365   17943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0124 10:14:21.525384   17943 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3057/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3057/.minikube}
	I0124 10:14:21.525400   17943 ubuntu.go:177] setting up certificates
	I0124 10:14:21.525413   17943 provision.go:83] configureAuth start
	I0124 10:14:21.525500   17943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-582000
	I0124 10:14:21.582875   17943 provision.go:138] copyHostCerts
	I0124 10:14:21.582967   17943 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem, removing ...
	I0124 10:14:21.582973   17943 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem
	I0124 10:14:21.583080   17943 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem (1078 bytes)
	I0124 10:14:21.583265   17943 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem, removing ...
	I0124 10:14:21.583271   17943 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem
	I0124 10:14:21.583333   17943 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem (1123 bytes)
	I0124 10:14:21.583498   17943 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem, removing ...
	I0124 10:14:21.583503   17943 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem
	I0124 10:14:21.583562   17943 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem (1675 bytes)
	I0124 10:14:21.583679   17943 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-582000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-582000]
	I0124 10:14:21.655501   17943 provision.go:172] copyRemoteCerts
	I0124 10:14:21.655554   17943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0124 10:14:21.655601   17943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-582000
	I0124 10:14:21.714062   17943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53059 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/kubernetes-upgrade-582000/id_rsa Username:docker}
	I0124 10:14:21.807358   17943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0124 10:14:21.824896   17943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0124 10:14:21.841855   17943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0124 10:14:21.858780   17943 provision.go:86] duration metric: configureAuth took 333.351594ms
	I0124 10:14:21.858793   17943 ubuntu.go:193] setting minikube options for container-runtime
	I0124 10:14:21.858948   17943 config.go:180] Loaded profile config "kubernetes-upgrade-582000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0124 10:14:21.859013   17943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-582000
	I0124 10:14:21.916650   17943 main.go:141] libmachine: Using SSH client type: native
	I0124 10:14:21.916813   17943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 53059 <nil> <nil>}
	I0124 10:14:21.916829   17943 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0124 10:14:22.051914   17943 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0124 10:14:22.051928   17943 ubuntu.go:71] root file system type: overlay
	I0124 10:14:22.052077   17943 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0124 10:14:22.052182   17943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-582000
	I0124 10:14:22.108990   17943 main.go:141] libmachine: Using SSH client type: native
	I0124 10:14:22.109138   17943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 53059 <nil> <nil>}
	I0124 10:14:22.109197   17943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0124 10:14:22.250751   17943 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0124 10:14:22.250863   17943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-582000
	I0124 10:14:22.308636   17943 main.go:141] libmachine: Using SSH client type: native
	I0124 10:14:22.308792   17943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 53059 <nil> <nil>}
	I0124 10:14:22.308805   17943 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0124 10:14:22.923676   17943 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-12-15 22:25:58.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-24 18:14:22.137778635 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
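
The diff above is the provisioner rewriting /lib/systemd/system/docker.service: the empty ExecStart= line clears the command inherited from the stock unit so that the TLS-enabled dockerd invocation that follows is the only ExecStart, and the daemon-reload/enable/restart sequence runs only because the diff reported a change (see the "|| { ... }" command issued at 10:14:22.308805). To inspect the unit that ended up in effect inside the node, something along these lines should work (the start flow performs the same check over SSH at 10:14:24.026073):

    minikube -p kubernetes-upgrade-582000 ssh -- sudo systemctl cat docker.service
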
	
	I0124 10:14:22.923699   17943 machine.go:91] provisioned docker machine in 1.794471966s
	I0124 10:14:22.923706   17943 client.go:171] LocalClient.Create took 9.841979437s
	I0124 10:14:22.923725   17943 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-582000" took 9.842047031s
	I0124 10:14:22.923733   17943 start.go:300] post-start starting for "kubernetes-upgrade-582000" (driver="docker")
	I0124 10:14:22.923738   17943 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0124 10:14:22.923812   17943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0124 10:14:22.923872   17943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-582000
	I0124 10:14:22.985457   17943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53059 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/kubernetes-upgrade-582000/id_rsa Username:docker}
	I0124 10:14:23.081541   17943 ssh_runner.go:195] Run: cat /etc/os-release
	I0124 10:14:23.085166   17943 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0124 10:14:23.085183   17943 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0124 10:14:23.085191   17943 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0124 10:14:23.085198   17943 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0124 10:14:23.085206   17943 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/addons for local assets ...
	I0124 10:14:23.085313   17943 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/files for local assets ...
	I0124 10:14:23.085494   17943 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem -> 43552.pem in /etc/ssl/certs
	I0124 10:14:23.085682   17943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0124 10:14:23.093019   17943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /etc/ssl/certs/43552.pem (1708 bytes)
	I0124 10:14:23.110653   17943 start.go:303] post-start completed in 186.909224ms
	I0124 10:14:23.111187   17943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-582000
	I0124 10:14:23.168221   17943 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/config.json ...
	I0124 10:14:23.168627   17943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0124 10:14:23.168690   17943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-582000
	I0124 10:14:23.226442   17943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53059 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/kubernetes-upgrade-582000/id_rsa Username:docker}
	I0124 10:14:23.318040   17943 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0124 10:14:23.322685   17943 start.go:128] duration metric: createHost completed in 10.263247494s
	I0124 10:14:23.322705   17943 start.go:83] releasing machines lock for "kubernetes-upgrade-582000", held for 10.263377361s
	I0124 10:14:23.322784   17943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-582000
	I0124 10:14:23.381048   17943 ssh_runner.go:195] Run: cat /version.json
	I0124 10:14:23.381049   17943 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0124 10:14:23.381129   17943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-582000
	I0124 10:14:23.381152   17943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-582000
	I0124 10:14:23.446477   17943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53059 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/kubernetes-upgrade-582000/id_rsa Username:docker}
	I0124 10:14:23.446572   17943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53059 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/kubernetes-upgrade-582000/id_rsa Username:docker}
	I0124 10:14:23.739667   17943 ssh_runner.go:195] Run: systemctl --version
	I0124 10:14:23.744768   17943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0124 10:14:23.749807   17943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0124 10:14:23.769975   17943 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0124 10:14:23.770057   17943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0124 10:14:23.784183   17943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0124 10:14:23.792053   17943 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0124 10:14:23.792066   17943 start.go:472] detecting cgroup driver to use...
	I0124 10:14:23.792080   17943 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 10:14:23.792171   17943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 10:14:23.805504   17943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0124 10:14:23.814237   17943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0124 10:14:23.822730   17943 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0124 10:14:23.822786   17943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0124 10:14:23.831393   17943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 10:14:23.840023   17943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0124 10:14:23.848266   17943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 10:14:23.856539   17943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0124 10:14:23.864454   17943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0124 10:14:23.872685   17943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0124 10:14:23.880152   17943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0124 10:14:23.887316   17943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:14:23.955755   17943 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0124 10:14:24.025912   17943 start.go:472] detecting cgroup driver to use...
	I0124 10:14:24.025932   17943 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 10:14:24.026073   17943 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0124 10:14:24.037259   17943 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0124 10:14:24.037328   17943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0124 10:14:24.048484   17943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 10:14:24.063631   17943 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0124 10:14:24.157641   17943 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0124 10:14:24.252602   17943 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0124 10:14:24.252621   17943 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0124 10:14:24.266270   17943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:14:24.364382   17943 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0124 10:14:24.572515   17943 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 10:14:24.607258   17943 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
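
The back-to-back "docker version" probes above confirm the daemon came back up after the restart that followed the daemon.json rewrite. A mismatch between the Docker cgroup driver and the one kubelet expects is a classic cause of control-plane bring-up failures, so it is worth double-checking that the node really ended up on cgroupfs; from the host, assuming the kic container name shown in this log, something like the following works (the start flow runs the equivalent check itself at 10:14:24.925050):

    docker exec kubernetes-upgrade-582000 docker info --format '{{.CgroupDriver}}'
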
	I0124 10:14:24.680950   17943 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.22 ...
	I0124 10:14:24.681145   17943 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-582000 dig +short host.docker.internal
	I0124 10:14:24.800836   17943 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0124 10:14:24.800980   17943 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0124 10:14:24.805612   17943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
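
	The one-liner above is an idempotent host-entry update: drop any stale host.minikube.internal line, append the current mapping, and replace /etc/hosts from a temp file. The same pattern is reused further down for control-plane.minikube.internal. Written out as a standalone sketch (hostname and IP copied from the log):

	    # Re-runnable equivalent of the /etc/hosts update shown above.
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      printf '192.168.65.2\thost.minikube.internal\n'
	    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts
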
	I0124 10:14:24.815629   17943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-582000
	I0124 10:14:24.874598   17943 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0124 10:14:24.874682   17943 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 10:14:24.900239   17943 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0124 10:14:24.900255   17943 docker.go:560] Images already preloaded, skipping extraction
	I0124 10:14:24.900333   17943 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 10:14:24.924938   17943 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
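
	The two identical image listings above confirm that the v1.16.0 control-plane images came from the preload tarball, so kubeadm should not need to pull anything during init. The same check by hand, using the format string from the log:

	    # List the preloaded control-plane images for this Kubernetes version.
	    docker images --format '{{.Repository}}:{{.Tag}}' | grep -E 'k8s.gcr.io|storage-provisioner'
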
	I0124 10:14:24.924951   17943 cache_images.go:84] Images are preloaded, skipping loading
	I0124 10:14:24.925050   17943 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0124 10:14:25.027900   17943 cni.go:84] Creating CNI manager for ""
	I0124 10:14:25.027917   17943 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0124 10:14:25.027938   17943 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0124 10:14:25.027956   17943 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-582000 NodeName:kubernetes-upgrade-582000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0124 10:14:25.028078   17943 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-582000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-582000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0124 10:14:25.028169   17943 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-582000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-582000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
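
	Everything between "kubeadm config:" and the kubelet [Unit] block above is what lands in /var/tmp/minikube/kubeadm.yaml.new and the kubelet systemd drop-in a few lines below. One way to sanity-check a config like this before letting minikube run init (not something this run does; --dry-run is standard kubeadm behaviour, and the paths are the ones used later in the log):

	    # Validate the generated kubeadm config without touching the node.
	    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
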
	I0124 10:14:25.028235   17943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0124 10:14:25.036260   17943 binaries.go:44] Found k8s binaries, skipping transfer
	I0124 10:14:25.036321   17943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0124 10:14:25.043784   17943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0124 10:14:25.056782   17943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0124 10:14:25.070248   17943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0124 10:14:25.083400   17943 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0124 10:14:25.087404   17943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 10:14:25.097444   17943 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000 for IP: 192.168.76.2
	I0124 10:14:25.097462   17943 certs.go:186] acquiring lock for shared ca certs: {Name:mk86da88dcf76a0f52f98f7208bc1d04a2e55c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:14:25.097658   17943 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key
	I0124 10:14:25.097724   17943 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key
	I0124 10:14:25.097781   17943 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/client.key
	I0124 10:14:25.097795   17943 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/client.crt with IP's: []
	I0124 10:14:25.192785   17943 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/client.crt ...
	I0124 10:14:25.192801   17943 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/client.crt: {Name:mk50495e55667cf2a58298d4be2c79787c1d9fd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:14:25.193144   17943 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/client.key ...
	I0124 10:14:25.193152   17943 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/client.key: {Name:mk75fef9e94fb474bc62972211fd879c0f84c437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:14:25.193347   17943 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/apiserver.key.31bdca25
	I0124 10:14:25.193366   17943 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0124 10:14:25.246955   17943 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/apiserver.crt.31bdca25 ...
	I0124 10:14:25.246967   17943 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/apiserver.crt.31bdca25: {Name:mk5d09903d69796e98c61afe335af42dc50c7268 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:14:25.247218   17943 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/apiserver.key.31bdca25 ...
	I0124 10:14:25.247226   17943 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/apiserver.key.31bdca25: {Name:mk98ec765c9c87b859411961cc9e18af17cf05f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:14:25.247407   17943 certs.go:333] copying /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/apiserver.crt
	I0124 10:14:25.247561   17943 certs.go:337] copying /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/apiserver.key
	I0124 10:14:25.247729   17943 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/proxy-client.key
	I0124 10:14:25.247743   17943 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/proxy-client.crt with IP's: []
	I0124 10:14:25.388598   17943 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/proxy-client.crt ...
	I0124 10:14:25.388610   17943 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/proxy-client.crt: {Name:mk0e2d336f9073cc6445e4987b08ac3eb7704160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:14:25.388851   17943 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/proxy-client.key ...
	I0124 10:14:25.388859   17943 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/proxy-client.key: {Name:mkdaeb881c68fac360583a81f43ace2d2ba8639f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:14:25.389225   17943 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem (1338 bytes)
	W0124 10:14:25.389270   17943 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355_empty.pem, impossibly tiny 0 bytes
	I0124 10:14:25.389305   17943 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem (1679 bytes)
	I0124 10:14:25.389352   17943 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem (1078 bytes)
	I0124 10:14:25.389387   17943 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem (1123 bytes)
	I0124 10:14:25.389421   17943 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem (1675 bytes)
	I0124 10:14:25.389506   17943 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem (1708 bytes)
	I0124 10:14:25.390048   17943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0124 10:14:25.409056   17943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0124 10:14:25.426530   17943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0124 10:14:25.443787   17943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0124 10:14:25.461173   17943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0124 10:14:25.478270   17943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0124 10:14:25.495746   17943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0124 10:14:25.513496   17943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0124 10:14:25.531917   17943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0124 10:14:25.550676   17943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem --> /usr/share/ca-certificates/4355.pem (1338 bytes)
	I0124 10:14:25.568087   17943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /usr/share/ca-certificates/43552.pem (1708 bytes)
	I0124 10:14:25.585466   17943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0124 10:14:25.598740   17943 ssh_runner.go:195] Run: openssl version
	I0124 10:14:25.604340   17943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0124 10:14:25.612551   17943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:14:25.616564   17943 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:14:25.616611   17943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:14:25.622301   17943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0124 10:14:25.630515   17943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4355.pem && ln -fs /usr/share/ca-certificates/4355.pem /etc/ssl/certs/4355.pem"
	I0124 10:14:25.638592   17943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4355.pem
	I0124 10:14:25.642742   17943 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:34 /usr/share/ca-certificates/4355.pem
	I0124 10:14:25.642796   17943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4355.pem
	I0124 10:14:25.648220   17943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4355.pem /etc/ssl/certs/51391683.0"
	I0124 10:14:25.656339   17943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43552.pem && ln -fs /usr/share/ca-certificates/43552.pem /etc/ssl/certs/43552.pem"
	I0124 10:14:25.664853   17943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43552.pem
	I0124 10:14:25.668961   17943 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:34 /usr/share/ca-certificates/43552.pem
	I0124 10:14:25.669009   17943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43552.pem
	I0124 10:14:25.674579   17943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43552.pem /etc/ssl/certs/3ec20f2e.0"
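
	The openssl x509 -hash calls explain the symlink names used just above: each CA is linked as /etc/ssl/certs/<subject-hash>.0, and the hashes b5213941, 51391683 and 3ec20f2e in the ln targets are exactly what those calls print. Reproducing the mapping for the three certificates installed in this run:

	    # Print the c_rehash-style symlink name for each installed CA.
	    for pem in minikubeCA 4355 43552; do
	      printf '%s.pem -> %s.0\n' "$pem" \
	        "$(openssl x509 -hash -noout -in /usr/share/ca-certificates/$pem.pem)"
	    done
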
	I0124 10:14:25.682960   17943 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-582000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-582000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:14:25.683064   17943 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0124 10:14:25.705817   17943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0124 10:14:25.713998   17943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0124 10:14:25.721650   17943 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0124 10:14:25.721710   17943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0124 10:14:25.729574   17943 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0124 10:14:25.729599   17943 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0124 10:14:25.779270   17943 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0124 10:14:25.779358   17943 kubeadm.go:322] [preflight] Running pre-flight checks
	I0124 10:14:26.082021   17943 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0124 10:14:26.082143   17943 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0124 10:14:26.082305   17943 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0124 10:14:26.307124   17943 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0124 10:14:26.308429   17943 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0124 10:14:26.314885   17943 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0124 10:14:26.381931   17943 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0124 10:14:26.424244   17943 out.go:204]   - Generating certificates and keys ...
	I0124 10:14:26.424417   17943 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0124 10:14:26.424541   17943 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0124 10:14:26.481745   17943 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0124 10:14:26.592478   17943 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0124 10:14:26.786171   17943 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0124 10:14:26.843444   17943 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0124 10:14:27.015484   17943 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0124 10:14:27.015629   17943 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-582000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0124 10:14:27.083337   17943 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0124 10:14:27.083549   17943 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-582000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0124 10:14:27.401842   17943 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0124 10:14:27.600746   17943 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0124 10:14:27.753093   17943 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0124 10:14:27.753174   17943 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0124 10:14:27.962420   17943 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0124 10:14:28.060789   17943 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0124 10:14:28.149240   17943 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0124 10:14:28.346678   17943 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0124 10:14:28.347517   17943 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0124 10:14:28.369220   17943 out.go:204]   - Booting up control plane ...
	I0124 10:14:28.369353   17943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0124 10:14:28.369471   17943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0124 10:14:28.369574   17943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0124 10:14:28.369693   17943 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0124 10:14:28.369926   17943 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0124 10:15:08.357923   17943 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0124 10:15:08.358835   17943 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:15:08.359051   17943 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:15:13.360364   17943 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:15:13.360586   17943 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:15:23.360860   17943 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:15:23.361031   17943 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:15:43.361770   17943 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:15:43.361941   17943 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:16:23.362775   17943 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:16:23.362945   17943 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:16:23.362959   17943 kubeadm.go:322] 
	I0124 10:16:23.363007   17943 kubeadm.go:322] Unfortunately, an error has occurred:
	I0124 10:16:23.363042   17943 kubeadm.go:322] 	timed out waiting for the condition
	I0124 10:16:23.363046   17943 kubeadm.go:322] 
	I0124 10:16:23.363075   17943 kubeadm.go:322] This error is likely caused by:
	I0124 10:16:23.363100   17943 kubeadm.go:322] 	- The kubelet is not running
	I0124 10:16:23.363201   17943 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0124 10:16:23.363216   17943 kubeadm.go:322] 
	I0124 10:16:23.363307   17943 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0124 10:16:23.363339   17943 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0124 10:16:23.363367   17943 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0124 10:16:23.363372   17943 kubeadm.go:322] 
	I0124 10:16:23.363477   17943 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0124 10:16:23.363541   17943 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0124 10:16:23.363597   17943 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0124 10:16:23.363645   17943 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0124 10:16:23.363706   17943 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0124 10:16:23.363733   17943 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0124 10:16:23.366271   17943 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0124 10:16:23.366353   17943 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0124 10:16:23.366470   17943 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
	I0124 10:16:23.366581   17943 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0124 10:16:23.366671   17943 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0124 10:16:23.366737   17943 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0124 10:16:23.366906   17943 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-582000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-582000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-582000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-582000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0124 10:16:23.366940   17943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0124 10:16:23.792996   17943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 10:16:23.804275   17943 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0124 10:16:23.804342   17943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0124 10:16:23.812377   17943 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0124 10:16:23.812408   17943 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0124 10:16:23.865606   17943 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0124 10:16:23.865664   17943 kubeadm.go:322] [preflight] Running pre-flight checks
	I0124 10:16:24.199956   17943 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0124 10:16:24.200066   17943 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0124 10:16:24.200180   17943 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0124 10:16:24.454133   17943 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0124 10:16:24.455850   17943 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0124 10:16:24.462650   17943 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0124 10:16:24.523257   17943 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0124 10:16:24.547913   17943 out.go:204]   - Generating certificates and keys ...
	I0124 10:16:24.548038   17943 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0124 10:16:24.548104   17943 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0124 10:16:24.548176   17943 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0124 10:16:24.548255   17943 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0124 10:16:24.548346   17943 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0124 10:16:24.548409   17943 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0124 10:16:24.548511   17943 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0124 10:16:24.548594   17943 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0124 10:16:24.548676   17943 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0124 10:16:24.548752   17943 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0124 10:16:24.548804   17943 kubeadm.go:322] [certs] Using the existing "sa" key
	I0124 10:16:24.548851   17943 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0124 10:16:24.596355   17943 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0124 10:16:24.708796   17943 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0124 10:16:24.864562   17943 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0124 10:16:24.952559   17943 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0124 10:16:24.953128   17943 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0124 10:16:24.975047   17943 out.go:204]   - Booting up control plane ...
	I0124 10:16:24.975193   17943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0124 10:16:24.975341   17943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0124 10:16:24.975437   17943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0124 10:16:24.975536   17943 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0124 10:16:24.975724   17943 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0124 10:17:04.967702   17943 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0124 10:17:04.968593   17943 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:17:04.968785   17943 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:17:09.969425   17943 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:17:09.969589   17943 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:17:19.970360   17943 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:17:19.970515   17943 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:17:39.972909   17943 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:17:39.973136   17943 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:18:19.974430   17943 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:18:19.974633   17943 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:18:19.974648   17943 kubeadm.go:322] 
	I0124 10:18:19.974690   17943 kubeadm.go:322] Unfortunately, an error has occurred:
	I0124 10:18:19.974723   17943 kubeadm.go:322] 	timed out waiting for the condition
	I0124 10:18:19.974729   17943 kubeadm.go:322] 
	I0124 10:18:19.974753   17943 kubeadm.go:322] This error is likely caused by:
	I0124 10:18:19.974779   17943 kubeadm.go:322] 	- The kubelet is not running
	I0124 10:18:19.974849   17943 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0124 10:18:19.974865   17943 kubeadm.go:322] 
	I0124 10:18:19.975000   17943 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0124 10:18:19.975034   17943 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0124 10:18:19.975071   17943 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0124 10:18:19.975077   17943 kubeadm.go:322] 
	I0124 10:18:19.975166   17943 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0124 10:18:19.975251   17943 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0124 10:18:19.975408   17943 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0124 10:18:19.975467   17943 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0124 10:18:19.975620   17943 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0124 10:18:19.975649   17943 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0124 10:18:19.979231   17943 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0124 10:18:19.979332   17943 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0124 10:18:19.979449   17943 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
	I0124 10:18:19.979531   17943 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0124 10:18:19.979600   17943 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0124 10:18:19.979682   17943 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0124 10:18:19.979708   17943 kubeadm.go:403] StartCluster complete in 3m54.29522569s
	I0124 10:18:19.979795   17943 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:18:20.004506   17943 logs.go:279] 0 containers: []
	W0124 10:18:20.004520   17943 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:18:20.004612   17943 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:18:20.027600   17943 logs.go:279] 0 containers: []
	W0124 10:18:20.027613   17943 logs.go:281] No container was found matching "etcd"
	I0124 10:18:20.027700   17943 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:18:20.049949   17943 logs.go:279] 0 containers: []
	W0124 10:18:20.049971   17943 logs.go:281] No container was found matching "coredns"
	I0124 10:18:20.050039   17943 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:18:20.072991   17943 logs.go:279] 0 containers: []
	W0124 10:18:20.073007   17943 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:18:20.073078   17943 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:18:20.099092   17943 logs.go:279] 0 containers: []
	W0124 10:18:20.099107   17943 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:18:20.099181   17943 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:18:20.126135   17943 logs.go:279] 0 containers: []
	W0124 10:18:20.126150   17943 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:18:20.126226   17943 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:18:20.154860   17943 logs.go:279] 0 containers: []
	W0124 10:18:20.154875   17943 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:18:20.154956   17943 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:18:20.182100   17943 logs.go:279] 0 containers: []
	W0124 10:18:20.182114   17943 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:18:20.182126   17943 logs.go:124] Gathering logs for kubelet ...
	I0124 10:18:20.182137   17943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:18:20.227595   17943 logs.go:124] Gathering logs for dmesg ...
	I0124 10:18:20.227615   17943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:18:20.243673   17943 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:18:20.243689   17943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:18:20.302768   17943 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:18:20.302779   17943 logs.go:124] Gathering logs for Docker ...
	I0124 10:18:20.302786   17943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:18:20.320031   17943 logs.go:124] Gathering logs for container status ...
	I0124 10:18:20.320045   17943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:18:22.374366   17943 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054292884s)
	W0124 10:18:22.374500   17943 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0124 10:18:22.374518   17943 out.go:239] * 
	* 
	W0124 10:18:22.374694   17943 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0124 10:18:22.374711   17943 out.go:239] * 
	* 
	W0124 10:18:22.375494   17943 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0124 10:18:22.437118   17943 out.go:177] 
	W0124 10:18:22.479058   17943 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0124 10:18:22.479136   17943 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0124 10:18:22.479194   17943 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0124 10:18:22.500130   17943 out.go:177] 

                                                
                                                
** /stderr **
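The kubeadm wait-control-plane failure above traces back to the kubelet never answering its health check on the v1.16.0 node. A minimal triage sketch, using only the commands the log itself recommends and this run's profile name (kubernetes-upgrade-582000); the --extra-config flag is the one the suggestion at the end of the stderr block points at:

	out/minikube-darwin-amd64 -p kubernetes-upgrade-582000 ssh -- sudo systemctl status kubelet
	out/minikube-darwin-amd64 -p kubernetes-upgrade-582000 ssh -- sudo journalctl -xeu kubelet
	out/minikube-darwin-amd64 -p kubernetes-upgrade-582000 ssh -- "docker ps -a | grep kube | grep -v pause"
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-582000 --kubernetes-version=v1.16.0 --driver=docker --extra-config=kubelet.cgroup-driver=systemd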
version_upgrade_test.go:232: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-582000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-582000
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-582000: (1.68313422s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-582000 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-582000 status --format={{.Host}}: exit status 7 (157.153107ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
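Exit status 7 from the status check is the expected shape right after a stop: minikube's status help describes the exit code as bit flags for host, cluster and Kubernetes health, so 7 simply means all three layers are down, which is why the harness records it as possibly OK. The same Go-template flag used above can pull the other fields too; a small sketch, assuming the template fields mirror the default status output (Kubelet, APIServer, Kubeconfig alongside Host):

	out/minikube-darwin-amd64 -p kubernetes-upgrade-582000 status --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'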
version_upgrade_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-582000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:251: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-582000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (55.585041993s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-582000 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-582000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-582000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (647.916621ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-582000] minikube v1.28.0 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-582000
	    minikube start -p kubernetes-upgrade-582000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5820002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-582000 --kubernetes-version=v1.26.1
	    

                                                
                                                
** /stderr **
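Exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) is the guard the test is probing for: minikube refuses to downgrade an existing v1.26.1 profile to v1.16.0 in place. Following the first suggestion printed above, the only way to actually return to the older version would be to recreate the profile, roughly:

	out/minikube-darwin-amd64 delete -p kubernetes-upgrade-582000
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-582000 --kubernetes-version=v1.16.0 --driver=docker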
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-582000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:283: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-582000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (27.875696453s)
version_upgrade_test.go:287: *** TestKubernetesUpgrade FAILED at 2023-01-24 10:19:48.603366 -0800 PST m=+3146.516519358
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-582000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-582000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3644824a30b2fa9122daecca6940c3f696912e6bd2803491dd81899523d72250",
	        "Created": "2023-01-24T18:14:19.814847608Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 212674,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-24T18:18:25.982898389Z",
	            "FinishedAt": "2023-01-24T18:18:23.114843057Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/3644824a30b2fa9122daecca6940c3f696912e6bd2803491dd81899523d72250/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3644824a30b2fa9122daecca6940c3f696912e6bd2803491dd81899523d72250/hostname",
	        "HostsPath": "/var/lib/docker/containers/3644824a30b2fa9122daecca6940c3f696912e6bd2803491dd81899523d72250/hosts",
	        "LogPath": "/var/lib/docker/containers/3644824a30b2fa9122daecca6940c3f696912e6bd2803491dd81899523d72250/3644824a30b2fa9122daecca6940c3f696912e6bd2803491dd81899523d72250-json.log",
	        "Name": "/kubernetes-upgrade-582000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-582000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-582000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/24be7768a17823b7ac0fd0e862e35b13523688f6c826e723ec75b48b5f31640b-init/diff:/var/lib/docker/overlay2/9c481b6fc4eda78a970c6b00c96c9f145b5d588195952c506fee1f3a38f6b827/diff:/var/lib/docker/overlay2/47481f9bc7214dddcf62375b5df8c94540c57703b1d6d51ac990043359419126/diff:/var/lib/docker/overlay2/b598a508e6cab858c4e3e0eb9d52d91575db756297302c95c476bb85082f44e0/diff:/var/lib/docker/overlay2/eb2938493863a763d06f90aa9a8774a6cf5891162d070c28c5ea356826cb9c00/diff:/var/lib/docker/overlay2/32de44262b0214eac920d9f248eab5c56907a2e7f6aa69c29c4e46c4074a7cac/diff:/var/lib/docker/overlay2/1a295a0be202f67a8806d1a09e94b6509e529ccee06e851bf52c63e932a41598/diff:/var/lib/docker/overlay2/f31de4b92cc5bd6d7f65b450b1907917e4402f07bc4105fb25aa1f8246c4a4e5/diff:/var/lib/docker/overlay2/91ed229d1177ed82e2ad15ef48cab26e269f4aac446f236ccf7602f7812b4844/diff:/var/lib/docker/overlay2/d76c7ee0e38e12ccde73d3892d573e4aace99b016da7a10649752a4fe4fe8638/diff:/var/lib/docker/overlay2/5ded14
3e1cfe20dc794363143043800e9ddaa724457e76dfc3480bc88bdcf50b/diff:/var/lib/docker/overlay2/93a2cbd1b1d5abd2ffd6d5d72e40d5e932e3cdee4ef9b08c0ff0e340f0b82250/diff:/var/lib/docker/overlay2/bb3f5e5d2796cb20f7b19463a04326cc5e79a70f606d384a018bf2677fdc9c59/diff:/var/lib/docker/overlay2/9cbefe5a0d987b7c33fa83edf8c1c1e50920e178002ea80eff9cdf15510e9f7b/diff:/var/lib/docker/overlay2/33e82fcf8ab49ebe12677ce9de48d21aef3b5793b44b0bcabc73c3674c581010/diff:/var/lib/docker/overlay2/e11a40d223ac259cc9385f5df423a4114d92d60488a253e2c2690c0b7457a8e8/diff:/var/lib/docker/overlay2/61a6833055db19e06daf765b9f85b5e3a0b5c194642afb06034ca3aba1f55909/diff:/var/lib/docker/overlay2/ee3319bed930e23040288482eca784537f46b47463ff427584d9b2b5211797e9/diff:/var/lib/docker/overlay2/eee58dbcea38f98302a06040b8570e985d703290fd458ac7329bfa0e55bd8448/diff:/var/lib/docker/overlay2/b658fe28fcf5e1fc3d6b2e9b4b847d6121a8b983e9c7fb1985d5ab2346e98f8b/diff:/var/lib/docker/overlay2/31ea9d8f709524c49028c58c0f8a05346bb654e8f78be840f557b2bedf65a54a/diff:/var/lib/d
ocker/overlay2/3cf9440c13010ba35d99312d9b15c642407767bf1e0fc45d05a2549c249afec7/diff:/var/lib/docker/overlay2/800878867e03c9e3e3b31fd2704a00aa2ae0e3986914887bf3d409b9550ff520/diff:/var/lib/docker/overlay2/581fb56aa8110438414995ed4c4a14b9912c75de1736b4b7d23e0b8fe720ecd9/diff:/var/lib/docker/overlay2/07b86cd484da6c8fb2f4bee904f870665d71591e8dc7be5efa53e1794b81d00f/diff:/var/lib/docker/overlay2/1079786f8be32b332c09a1406a8c3309f812525af6501891ff134bf591345a8d/diff:/var/lib/docker/overlay2/aaaabdfc565926946334338a8c5990344cee39d099a4b9f5151467564c8b476e/diff:/var/lib/docker/overlay2/f39918c58fc40801faea98e8de1249f865a73650e08eab314c9d152e6a3b34b5/diff:/var/lib/docker/overlay2/974fee46992fba128bb6ec5407ff459f67134f9674349df844ad7efddb1ce197/diff:/var/lib/docker/overlay2/da1fb4192fa1a6e35f4cad8340c23595b9197c7b307dc3acbddafef7c3eabc82/diff:/var/lib/docker/overlay2/3b77e137962b80628f381627a68f75ff468d7ccae4458d96fa0579ac771218fe/diff:/var/lib/docker/overlay2/3faa717465cabff3baa6c98c646cb65ad47d063b215430893bb7f045676
3a794/diff:/var/lib/docker/overlay2/9597f4dd91d0689567f5b93707e47b90e9d9ba308488dff4c828008698d40802/diff:/var/lib/docker/overlay2/04df968177dde5eecc89152d1a182fb9b925a8092117778b87497db1e350ce61/diff:/var/lib/docker/overlay2/466c44ae57218cc92c3105e1e0070db81834724a69f8a4417391186e96073aae/diff:/var/lib/docker/overlay2/ec2a2a479f00a246da5470537471969c82985b2cfb98a4e35c9ff415d2a73141/diff:/var/lib/docker/overlay2/bce991b71b3cfd9fbda43bee1a05c1f7f5d1182ddbcf8f503851bd5874171f2b/diff:/var/lib/docker/overlay2/fe2929ef3680f5c3a0bd3eb8595ea97b8e43bba3f4c05803abf70a6c92a68114/diff:/var/lib/docker/overlay2/656654dcbc6d86ba911abe2cc555591ab82c37f6960cd0dad3551f4bf80122ee/diff:/var/lib/docker/overlay2/0e7faeb1891565534bd5d77c8619c32e7b51715e75dc04640b4ef0c3bc72a6b8/diff:/var/lib/docker/overlay2/9f0ce2e9bbc988036f4207b6b8237ba1f52855f6ee97e629aa058d0adcb5e7c4/diff:/var/lib/docker/overlay2/2bd42f9be2610b8910f7b39f5ce1792b9f608e775e6d3b39b12fef011686b5bd/diff:/var/lib/docker/overlay2/79703f59d1d8ea0485fb2360a75307a714a167
fc401d5ce68a58a9a201ea4152/diff:/var/lib/docker/overlay2/e589647f3d60f11426627ec161f2bf8878a577bc3388bb26e206fc5182f4f83c/diff:/var/lib/docker/overlay2/059bdd38117e61965c55837c7fd45d7e2825b162ea4e4cab6a656a436a7bbed6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/24be7768a17823b7ac0fd0e862e35b13523688f6c826e723ec75b48b5f31640b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/24be7768a17823b7ac0fd0e862e35b13523688f6c826e723ec75b48b5f31640b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/24be7768a17823b7ac0fd0e862e35b13523688f6c826e723ec75b48b5f31640b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-582000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-582000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-582000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-582000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-582000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5cfee24f484d725e7f1938a4a67ee5e79735c09e314a7cfec321a209f1185bab",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53323"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53324"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53320"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53321"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53322"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5cfee24f484d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-582000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3644824a30b2",
	                        "kubernetes-upgrade-582000"
	                    ],
	                    "NetworkID": "503aa24fc7c6e5f132071d9b19d8b9c799db4e7429c248c9741c6048d317a55e",
	                    "EndpointID": "f36c588ef9034f38a0f26be99259821a08066f1e673f20af05ea89ed95cd7564",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
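The full docker inspect dump above is what the post-mortem helper archives; for manual triage a format template is usually enough to pull the container state and its address on the profile network, for example:

	docker inspect -f 'status={{.State.Status}} restarts={{.RestartCount}}' kubernetes-upgrade-582000
	docker inspect -f '{{with index .NetworkSettings.Networks "kubernetes-upgrade-582000"}}{{.IPAddress}}{{end}}' kubernetes-upgrade-582000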
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-582000 -n kubernetes-upgrade-582000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-582000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-582000 logs -n 25: (4.14661475s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p pause-324000 --memory=2048  | pause-324000              | jenkins | v1.28.0 | 24 Jan 23 10:16 PST | 24 Jan 23 10:17 PST |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                           |         |         |                     |                     |
	| start   | -p pause-324000                | pause-324000              | jenkins | v1.28.0 | 24 Jan 23 10:17 PST | 24 Jan 23 10:18 PST |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| pause   | -p pause-324000                | pause-324000              | jenkins | v1.28.0 | 24 Jan 23 10:18 PST | 24 Jan 23 10:18 PST |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	| unpause | -p pause-324000                | pause-324000              | jenkins | v1.28.0 | 24 Jan 23 10:18 PST | 24 Jan 23 10:18 PST |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	| pause   | -p pause-324000                | pause-324000              | jenkins | v1.28.0 | 24 Jan 23 10:18 PST | 24 Jan 23 10:18 PST |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	| delete  | -p pause-324000                | pause-324000              | jenkins | v1.28.0 | 24 Jan 23 10:18 PST | 24 Jan 23 10:18 PST |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	| profile | list --output json             | minikube                  | jenkins | v1.28.0 | 24 Jan 23 10:18 PST | 24 Jan 23 10:18 PST |
	| delete  | -p pause-324000                | pause-324000              | jenkins | v1.28.0 | 24 Jan 23 10:18 PST | 24 Jan 23 10:18 PST |
	| start   | -p NoKubernetes-391000         | NoKubernetes-391000       | jenkins | v1.28.0 | 24 Jan 23 10:18 PST |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-391000         | NoKubernetes-391000       | jenkins | v1.28.0 | 24 Jan 23 10:18 PST | 24 Jan 23 10:19 PST |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-582000   | kubernetes-upgrade-582000 | jenkins | v1.28.0 | 24 Jan 23 10:18 PST | 24 Jan 23 10:18 PST |
	| start   | -p kubernetes-upgrade-582000   | kubernetes-upgrade-582000 | jenkins | v1.28.0 | 24 Jan 23 10:18 PST | 24 Jan 23 10:19 PST |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-391000         | NoKubernetes-391000       | jenkins | v1.28.0 | 24 Jan 23 10:19 PST | 24 Jan 23 10:19 PST |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-391000         | NoKubernetes-391000       | jenkins | v1.28.0 | 24 Jan 23 10:19 PST | 24 Jan 23 10:19 PST |
	| start   | -p NoKubernetes-391000         | NoKubernetes-391000       | jenkins | v1.28.0 | 24 Jan 23 10:19 PST | 24 Jan 23 10:19 PST |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-391000 sudo    | NoKubernetes-391000       | jenkins | v1.28.0 | 24 Jan 23 10:19 PST |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| profile | list                           | minikube                  | jenkins | v1.28.0 | 24 Jan 23 10:19 PST | 24 Jan 23 10:19 PST |
	| start   | -p kubernetes-upgrade-582000   | kubernetes-upgrade-582000 | jenkins | v1.28.0 | 24 Jan 23 10:19 PST |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| profile | list --output=json             | minikube                  | jenkins | v1.28.0 | 24 Jan 23 10:19 PST | 24 Jan 23 10:19 PST |
	| start   | -p kubernetes-upgrade-582000   | kubernetes-upgrade-582000 | jenkins | v1.28.0 | 24 Jan 23 10:19 PST | 24 Jan 23 10:19 PST |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-391000         | NoKubernetes-391000       | jenkins | v1.28.0 | 24 Jan 23 10:19 PST | 24 Jan 23 10:19 PST |
	| start   | -p NoKubernetes-391000         | NoKubernetes-391000       | jenkins | v1.28.0 | 24 Jan 23 10:19 PST | 24 Jan 23 10:19 PST |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-391000 sudo    | NoKubernetes-391000       | jenkins | v1.28.0 | 24 Jan 23 10:19 PST |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-391000         | NoKubernetes-391000       | jenkins | v1.28.0 | 24 Jan 23 10:19 PST | 24 Jan 23 10:19 PST |
	| start   | -p auto-129000 --memory=3072   | auto-129000               | jenkins | v1.28.0 | 24 Jan 23 10:19 PST |                     |
	|         | --alsologtostderr --wait=true  |                           |         |         |                     |                     |
	|         | --wait-timeout=15m             |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
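	For reference, the last row above is the start that the "Last Start" log below traces. Reconstructed from the table's own columns, and assuming an installed minikube binary stands in for the test build, the equivalent invocation would be roughly:
	
	    minikube start -p auto-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
	
	The profile name, memory size, wait settings and driver come straight from the table; nothing else is added.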
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/24 10:19:31
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0124 10:19:31.702478   20023 out.go:296] Setting OutFile to fd 1 ...
	I0124 10:19:31.702749   20023 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 10:19:31.702755   20023 out.go:309] Setting ErrFile to fd 2...
	I0124 10:19:31.702759   20023 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 10:19:31.702859   20023 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
	I0124 10:19:31.703388   20023 out.go:303] Setting JSON to false
	I0124 10:19:31.722073   20023 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4746,"bootTime":1674579625,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0124 10:19:31.722157   20023 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0124 10:19:31.744016   20023 out.go:177] * [auto-129000] minikube v1.28.0 on Darwin 13.1
	I0124 10:19:31.786849   20023 notify.go:220] Checking for updates...
	I0124 10:19:31.808562   20023 out.go:177]   - MINIKUBE_LOCATION=15565
	I0124 10:19:31.829356   20023 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 10:19:31.850936   20023 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0124 10:19:31.872904   20023 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 10:19:31.894570   20023 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	I0124 10:19:31.915799   20023 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0124 10:19:31.938489   20023 config.go:180] Loaded profile config "kubernetes-upgrade-582000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 10:19:31.938595   20023 driver.go:365] Setting default libvirt URI to qemu:///system
	I0124 10:19:32.000814   20023 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0124 10:19:32.000958   20023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 10:19:32.142747   20023 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-24 18:19:32.050412715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 10:19:32.186323   20023 out.go:177] * Using the docker driver based on user configuration
	I0124 10:19:32.207762   20023 start.go:296] selected driver: docker
	I0124 10:19:32.207796   20023 start.go:840] validating driver "docker" against <nil>
	I0124 10:19:32.207816   20023 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0124 10:19:32.211798   20023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 10:19:32.354793   20023 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-24 18:19:32.261974666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 10:19:32.354919   20023 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0124 10:19:32.355082   20023 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0124 10:19:32.376439   20023 out.go:177] * Using Docker Desktop driver with root privileges
	I0124 10:19:32.398370   20023 cni.go:84] Creating CNI manager for ""
	I0124 10:19:32.398409   20023 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0124 10:19:32.398458   20023 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0124 10:19:32.398497   20023 start_flags.go:319] config:
	{Name:auto-129000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:auto-129000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:19:32.441131   20023 out.go:177] * Starting control plane node auto-129000 in cluster auto-129000
	I0124 10:19:32.462354   20023 cache.go:120] Beginning downloading kic base image for docker with docker
	I0124 10:19:32.484455   20023 out.go:177] * Pulling base image ...
	I0124 10:19:32.527464   20023 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0124 10:19:32.527470   20023 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 10:19:32.527560   20023 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0124 10:19:32.527585   20023 cache.go:57] Caching tarball of preloaded images
	I0124 10:19:32.527816   20023 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0124 10:19:32.527831   20023 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0124 10:19:32.528874   20023 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/auto-129000/config.json ...
	I0124 10:19:32.529042   20023 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/auto-129000/config.json: {Name:mk4faa7d31b7cb61e58f17801d1553d260ec2c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:19:32.585004   20023 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0124 10:19:32.585033   20023 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0124 10:19:32.585068   20023 cache.go:193] Successfully downloaded all kic artifacts
	I0124 10:19:32.585111   20023 start.go:364] acquiring machines lock for auto-129000: {Name:mk8e5be430e27d048e4fb41944d30342b851e603 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0124 10:19:32.585258   20023 start.go:368] acquired machines lock for "auto-129000" in 135.993µs
	I0124 10:19:32.585283   20023 start.go:93] Provisioning new machine with config: &{Name:auto-129000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:auto-129000 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0124 10:19:32.585342   20023 start.go:125] createHost starting for "" (driver="docker")
	I0124 10:19:32.628712   20023 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0124 10:19:32.629135   20023 start.go:159] libmachine.API.Create for "auto-129000" (driver="docker")
	I0124 10:19:32.629181   20023 client.go:168] LocalClient.Create starting
	I0124 10:19:32.629345   20023 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem
	I0124 10:19:32.629452   20023 main.go:141] libmachine: Decoding PEM data...
	I0124 10:19:32.629484   20023 main.go:141] libmachine: Parsing certificate...
	I0124 10:19:32.629583   20023 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem
	I0124 10:19:32.629643   20023 main.go:141] libmachine: Decoding PEM data...
	I0124 10:19:32.629659   20023 main.go:141] libmachine: Parsing certificate...
	I0124 10:19:32.630561   20023 cli_runner.go:164] Run: docker network inspect auto-129000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0124 10:19:32.685300   20023 cli_runner.go:211] docker network inspect auto-129000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0124 10:19:32.685396   20023 network_create.go:281] running [docker network inspect auto-129000] to gather additional debugging logs...
	I0124 10:19:32.685413   20023 cli_runner.go:164] Run: docker network inspect auto-129000
	W0124 10:19:32.740001   20023 cli_runner.go:211] docker network inspect auto-129000 returned with exit code 1
	I0124 10:19:32.740037   20023 network_create.go:284] error running [docker network inspect auto-129000]: docker network inspect auto-129000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-129000
	I0124 10:19:32.740049   20023 network_create.go:286] output of [docker network inspect auto-129000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-129000
	
	** /stderr **
	I0124 10:19:32.740124   20023 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0124 10:19:32.795980   20023 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0124 10:19:32.796297   20023 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000ab8560}
	I0124 10:19:32.796308   20023 network_create.go:123] attempt to create docker network auto-129000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0124 10:19:32.796377   20023 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-129000 auto-129000
	W0124 10:19:32.853089   20023 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-129000 auto-129000 returned with exit code 1
	W0124 10:19:32.853123   20023 network_create.go:148] failed to create docker network auto-129000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-129000 auto-129000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0124 10:19:32.853146   20023 network_create.go:115] failed to create docker network auto-129000 192.168.58.0/24, will retry: subnet is taken
	I0124 10:19:32.854472   20023 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0124 10:19:32.854762   20023 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000ab93c0}
	I0124 10:19:32.854773   20023 network_create.go:123] attempt to create docker network auto-129000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0124 10:19:32.854831   20023 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-129000 auto-129000
	I0124 10:19:32.944230   20023 network_create.go:107] docker network auto-129000 192.168.67.0/24 created
	I0124 10:19:32.944263   20023 kic.go:117] calculated static IP "192.168.67.2" for the "auto-129000" container
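	A minimal way to confirm what the log reports here, assuming the auto-129000 network is still present on the host, is to ask Docker directly for the subnet that was finally chosen after the 192.168.58.0/24 pool-overlap retry, and to list the networks minikube has labelled:
	
	    docker network inspect auto-129000 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	    # expected, per the log above: 192.168.67.0/24 via 192.168.67.1
	    docker network ls --filter label=created_by.minikube.sigs.k8s.io=true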
	I0124 10:19:32.944379   20023 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0124 10:19:33.000195   20023 cli_runner.go:164] Run: docker volume create auto-129000 --label name.minikube.sigs.k8s.io=auto-129000 --label created_by.minikube.sigs.k8s.io=true
	I0124 10:19:33.055751   20023 oci.go:103] Successfully created a docker volume auto-129000
	I0124 10:19:33.055855   20023 cli_runner.go:164] Run: docker run --rm --name auto-129000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-129000 --entrypoint /usr/bin/test -v auto-129000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -d /var/lib
	I0124 10:19:33.495014   20023 oci.go:107] Successfully prepared a docker volume auto-129000
	I0124 10:19:33.495063   20023 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 10:19:33.495079   20023 kic.go:190] Starting extracting preloaded images to volume ...
	I0124 10:19:33.495189   20023 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-129000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir
	I0124 10:19:37.606226   19842 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.043243318s)
	I0124 10:19:37.606310   19842 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0124 10:19:37.694949   19842 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0124 10:19:37.799209   19842 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0124 10:19:37.896897   19842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:19:37.983495   19842 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0124 10:19:38.021125   19842 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0124 10:19:38.021252   19842 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0124 10:19:38.029380   19842 start.go:540] Will wait 60s for crictl version
	I0124 10:19:38.029459   19842 ssh_runner.go:195] Run: which crictl
	I0124 10:19:38.037492   19842 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0124 10:19:38.151691   19842 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.22
	RuntimeApiVersion:  v1alpha2
	I0124 10:19:38.151794   19842 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 10:19:38.196985   19842 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 10:19:38.293128   19842 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.22 ...
	I0124 10:19:38.293233   19842 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-582000 dig +short host.docker.internal
	I0124 10:19:38.431426   19842 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0124 10:19:38.431603   19842 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0124 10:19:38.439757   19842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-582000
	I0124 10:19:38.519880   19842 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 10:19:38.520000   19842 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 10:19:38.556932   19842 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.4
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	registry.k8s.io/etcd:v3.3.8-0-gke.1
	k8s.gcr.io/pause:3.1
	registry.k8s.io/pause:test2
	
	-- /stdout --
	I0124 10:19:38.556954   19842 docker.go:560] Images already preloaded, skipping extraction
	I0124 10:19:38.557040   19842 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 10:19:38.639490   19842 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.4
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	registry.k8s.io/etcd:v3.3.8-0-gke.1
	k8s.gcr.io/pause:3.1
	registry.k8s.io/pause:test2
	
	-- /stdout --
	I0124 10:19:38.639517   19842 cache_images.go:84] Images are preloaded, skipping loading
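	To re-check the image list gathered above from outside the log, one option (an illustration, not something the test itself runs) is to execute the same docker command inside the node over minikube's ssh wrapper:
	
	    minikube -p kubernetes-upgrade-582000 ssh -- docker images --format '{{.Repository}}:{{.Tag}}'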
	I0124 10:19:38.639651   19842 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0124 10:19:38.932645   19842 cni.go:84] Creating CNI manager for ""
	I0124 10:19:38.932666   19842 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0124 10:19:38.932697   19842 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0124 10:19:38.932731   19842 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-582000 NodeName:kubernetes-upgrade-582000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0124 10:19:38.932899   19842 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-582000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
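	The YAML above is the kubeadm configuration minikube generates and later copies to /var/tmp/minikube/kubeadm.yaml.new. minikube drives kubeadm itself, but purely as an illustrative sanity check, and assuming the kubeadm binary is staged alongside kubelet under /var/lib/minikube/binaries/v1.26.1, a dry run against such a file from inside the node would look like:
	
	    sudo /var/lib/minikube/binaries/v1.26.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run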
	
	I0124 10:19:38.932981   19842 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-582000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-582000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0124 10:19:38.933042   19842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0124 10:19:38.949882   19842 binaries.go:44] Found k8s binaries, skipping transfer
	I0124 10:19:38.949985   19842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0124 10:19:38.972049   19842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (457 bytes)
	I0124 10:19:39.037378   19842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0124 10:19:39.059662   19842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
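	The three files copied above are the kubelet systemd drop-in, the kubelet unit, and the kubeadm config. A quick, illustrative way to confirm the drop-in took effect (paths exactly as shown in the log) is to have systemd print the merged unit from inside the node:
	
	    minikube -p kubernetes-upgrade-582000 ssh -- sudo systemctl cat kubelet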
	I0124 10:19:39.139247   19842 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0124 10:19:39.145086   19842 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000 for IP: 192.168.76.2
	I0124 10:19:39.145123   19842 certs.go:186] acquiring lock for shared ca certs: {Name:mk86da88dcf76a0f52f98f7208bc1d04a2e55c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:19:39.145342   19842 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key
	I0124 10:19:39.145425   19842 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key
	I0124 10:19:39.145544   19842 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/client.key
	I0124 10:19:39.145649   19842 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/apiserver.key.31bdca25
	I0124 10:19:39.145729   19842 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/proxy-client.key
	I0124 10:19:39.146060   19842 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem (1338 bytes)
	W0124 10:19:39.146132   19842 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355_empty.pem, impossibly tiny 0 bytes
	I0124 10:19:39.146148   19842 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem (1679 bytes)
	I0124 10:19:39.146205   19842 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem (1078 bytes)
	I0124 10:19:39.146251   19842 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem (1123 bytes)
	I0124 10:19:39.146296   19842 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem (1675 bytes)
	I0124 10:19:39.146412   19842 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem (1708 bytes)
	I0124 10:19:39.147336   19842 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0124 10:19:39.181478   19842 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0124 10:19:39.274528   19842 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0124 10:19:39.357219   19842 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0124 10:19:39.429475   19842 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0124 10:19:39.459978   19842 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0124 10:19:39.486045   19842 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0124 10:19:39.543191   19842 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0124 10:19:39.570980   19842 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem --> /usr/share/ca-certificates/4355.pem (1338 bytes)
	I0124 10:19:39.629156   19842 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /usr/share/ca-certificates/43552.pem (1708 bytes)
	I0124 10:19:39.655113   19842 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0124 10:19:39.686851   19842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0124 10:19:39.736841   19842 ssh_runner.go:195] Run: openssl version
	I0124 10:19:39.745107   19842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4355.pem && ln -fs /usr/share/ca-certificates/4355.pem /etc/ssl/certs/4355.pem"
	I0124 10:19:39.756710   19842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4355.pem
	I0124 10:19:39.765091   19842 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:34 /usr/share/ca-certificates/4355.pem
	I0124 10:19:39.765209   19842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4355.pem
	I0124 10:19:39.775263   19842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4355.pem /etc/ssl/certs/51391683.0"
	I0124 10:19:39.785462   19842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43552.pem && ln -fs /usr/share/ca-certificates/43552.pem /etc/ssl/certs/43552.pem"
	I0124 10:19:39.825867   19842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43552.pem
	I0124 10:19:39.833266   19842 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:34 /usr/share/ca-certificates/43552.pem
	I0124 10:19:39.833350   19842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43552.pem
	I0124 10:19:39.840729   19842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43552.pem /etc/ssl/certs/3ec20f2e.0"
	I0124 10:19:39.857125   19842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0124 10:19:39.868016   19842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:19:39.873702   19842 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:19:39.873797   19842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:19:39.883722   19842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
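	The symlink names used in the three steps above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash values, which is why each certificate is run through openssl x509 -hash before the link is created. For example, per the log:
	
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # prints b5213941, the basename of /etc/ssl/certs/b5213941.0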
	I0124 10:19:39.925807   19842 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-582000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-582000 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:19:39.925954   19842 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0124 10:19:39.970016   19842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0124 10:19:39.980199   19842 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0124 10:19:39.980218   19842 kubeadm.go:633] restartCluster start
	I0124 10:19:39.980290   19842 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0124 10:19:39.992862   19842 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:19:39.992964   19842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-582000
	I0124 10:19:40.070601   19842 kubeconfig.go:92] found "kubernetes-upgrade-582000" server: "https://127.0.0.1:53322"
	I0124 10:19:40.071292   19842 kapi.go:59] client config for kubernetes-upgrade-582000: &rest.Config{Host:"https://127.0.0.1:53322", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2449ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0124 10:19:40.071963   19842 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0124 10:19:40.083306   19842 api_server.go:165] Checking apiserver status ...
	I0124 10:19:40.083386   19842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:19:40.096024   19842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:19:40.597046   19842 api_server.go:165] Checking apiserver status ...
	I0124 10:19:40.597110   19842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:19:40.608394   19842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:19:40.924223   20023 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-129000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir: (7.428942024s)
	I0124 10:19:40.924244   20023 kic.go:199] duration metric: took 7.429118 seconds to extract preloaded images to volume
	I0124 10:19:40.924351   20023 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0124 10:19:41.073003   20023 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-129000 --name auto-129000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-129000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-129000 --network auto-129000 --ip 192.168.67.2 --volume auto-129000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a
	I0124 10:19:41.491723   20023 cli_runner.go:164] Run: docker container inspect auto-129000 --format={{.State.Running}}
	I0124 10:19:41.557176   20023 cli_runner.go:164] Run: docker container inspect auto-129000 --format={{.State.Status}}
	I0124 10:19:41.624462   20023 cli_runner.go:164] Run: docker exec auto-129000 stat /var/lib/dpkg/alternatives/iptables
	I0124 10:19:41.096711   19842 api_server.go:165] Checking apiserver status ...
	I0124 10:19:41.096858   19842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:19:41.108781   19842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:19:41.596713   19842 api_server.go:165] Checking apiserver status ...
	I0124 10:19:41.596789   19842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:19:41.608976   19842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:19:42.096707   19842 api_server.go:165] Checking apiserver status ...
	I0124 10:19:42.096765   19842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:19:42.107528   19842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:19:42.596725   19842 api_server.go:165] Checking apiserver status ...
	I0124 10:19:42.596830   19842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:19:42.606912   19842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:19:43.097177   19842 api_server.go:165] Checking apiserver status ...
	I0124 10:19:43.097258   19842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:19:43.107203   19842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:19:43.596711   19842 api_server.go:165] Checking apiserver status ...
	I0124 10:19:43.596828   19842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:19:43.635001   19842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:19:44.096731   19842 api_server.go:165] Checking apiserver status ...
	I0124 10:19:44.096809   19842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:19:44.108382   19842 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4947/cgroup
	W0124 10:19:44.118069   19842 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4947/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:19:44.118130   19842 ssh_runner.go:195] Run: ls
	I0124 10:19:44.122860   19842 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53322/healthz ...
	I0124 10:19:45.586885   19842 api_server.go:278] https://127.0.0.1:53322/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0124 10:19:45.586912   19842 retry.go:31] will retry after 263.082536ms: https://127.0.0.1:53322/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0124 10:19:41.751244   20023 oci.go:144] the created container "auto-129000" has a running status.
	I0124 10:19:41.751278   20023 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/auto-129000/id_rsa...
	I0124 10:19:41.809509   20023 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/auto-129000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0124 10:19:41.917935   20023 cli_runner.go:164] Run: docker container inspect auto-129000 --format={{.State.Status}}
	I0124 10:19:41.980333   20023 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0124 10:19:41.980354   20023 kic_runner.go:114] Args: [docker exec --privileged auto-129000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0124 10:19:42.084699   20023 cli_runner.go:164] Run: docker container inspect auto-129000 --format={{.State.Status}}
	I0124 10:19:42.143061   20023 machine.go:88] provisioning docker machine ...
	I0124 10:19:42.143102   20023 ubuntu.go:169] provisioning hostname "auto-129000"
	I0124 10:19:42.143200   20023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129000
	I0124 10:19:42.201947   20023 main.go:141] libmachine: Using SSH client type: native
	I0124 10:19:42.202139   20023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 53565 <nil> <nil>}
	I0124 10:19:42.202154   20023 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-129000 && echo "auto-129000" | sudo tee /etc/hostname
	I0124 10:19:42.345561   20023 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-129000
	
	I0124 10:19:42.345655   20023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129000
	I0124 10:19:42.406086   20023 main.go:141] libmachine: Using SSH client type: native
	I0124 10:19:42.406256   20023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 53565 <nil> <nil>}
	I0124 10:19:42.406269   20023 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-129000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-129000/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-129000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0124 10:19:42.541403   20023 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0124 10:19:42.541423   20023 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3057/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3057/.minikube}
	I0124 10:19:42.541440   20023 ubuntu.go:177] setting up certificates
	I0124 10:19:42.541449   20023 provision.go:83] configureAuth start
	I0124 10:19:42.541536   20023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-129000
	I0124 10:19:42.601191   20023 provision.go:138] copyHostCerts
	I0124 10:19:42.601287   20023 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem, removing ...
	I0124 10:19:42.601294   20023 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem
	I0124 10:19:42.601418   20023 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem (1675 bytes)
	I0124 10:19:42.601625   20023 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem, removing ...
	I0124 10:19:42.601631   20023 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem
	I0124 10:19:42.601695   20023 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem (1078 bytes)
	I0124 10:19:42.601859   20023 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem, removing ...
	I0124 10:19:42.601865   20023 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem
	I0124 10:19:42.601928   20023 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem (1123 bytes)
	I0124 10:19:42.602052   20023 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem org=jenkins.auto-129000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube auto-129000]
	I0124 10:19:42.744983   20023 provision.go:172] copyRemoteCerts
	I0124 10:19:42.745042   20023 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0124 10:19:42.745094   20023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129000
	I0124 10:19:42.806112   20023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53565 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/auto-129000/id_rsa Username:docker}
	I0124 10:19:42.901455   20023 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0124 10:19:42.919191   20023 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0124 10:19:42.936902   20023 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0124 10:19:42.954230   20023 provision.go:86] duration metric: configureAuth took 412.765556ms
	I0124 10:19:42.954244   20023 ubuntu.go:193] setting minikube options for container-runtime
	I0124 10:19:42.954392   20023 config.go:180] Loaded profile config "auto-129000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 10:19:42.954465   20023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129000
	I0124 10:19:43.013490   20023 main.go:141] libmachine: Using SSH client type: native
	I0124 10:19:43.013632   20023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 53565 <nil> <nil>}
	I0124 10:19:43.013647   20023 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0124 10:19:43.147147   20023 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0124 10:19:43.147161   20023 ubuntu.go:71] root file system type: overlay
	I0124 10:19:43.147314   20023 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0124 10:19:43.147407   20023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129000
	I0124 10:19:43.206531   20023 main.go:141] libmachine: Using SSH client type: native
	I0124 10:19:43.206700   20023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 53565 <nil> <nil>}
	I0124 10:19:43.206748   20023 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0124 10:19:43.350430   20023 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0124 10:19:43.350557   20023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129000
	I0124 10:19:43.410981   20023 main.go:141] libmachine: Using SSH client type: native
	I0124 10:19:43.411150   20023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 53565 <nil> <nil>}
	I0124 10:19:43.411191   20023 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0124 10:19:44.087315   20023 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-12-15 22:25:58.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-24 18:19:43.348280121 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0124 10:19:44.087344   20023 machine.go:91] provisioned docker machine in 1.944249523s
	I0124 10:19:44.087351   20023 client.go:171] LocalClient.Create took 11.458087618s
	I0124 10:19:44.087373   20023 start.go:167] duration metric: libmachine.API.Create for "auto-129000" took 11.458165437s
	I0124 10:19:44.087384   20023 start.go:300] post-start starting for "auto-129000" (driver="docker")
	I0124 10:19:44.087389   20023 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0124 10:19:44.087482   20023 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0124 10:19:44.087557   20023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129000
	I0124 10:19:44.152652   20023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53565 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/auto-129000/id_rsa Username:docker}
	I0124 10:19:44.248614   20023 ssh_runner.go:195] Run: cat /etc/os-release
	I0124 10:19:44.252846   20023 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0124 10:19:44.252867   20023 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0124 10:19:44.252875   20023 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0124 10:19:44.252889   20023 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0124 10:19:44.252903   20023 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/addons for local assets ...
	I0124 10:19:44.253024   20023 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/files for local assets ...
	I0124 10:19:44.253222   20023 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem -> 43552.pem in /etc/ssl/certs
	I0124 10:19:44.253444   20023 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0124 10:19:44.262131   20023 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /etc/ssl/certs/43552.pem (1708 bytes)
	I0124 10:19:44.282613   20023 start.go:303] post-start completed in 195.218226ms
	I0124 10:19:44.283193   20023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-129000
	I0124 10:19:44.346326   20023 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/auto-129000/config.json ...
	I0124 10:19:44.346777   20023 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0124 10:19:44.346868   20023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129000
	I0124 10:19:44.420227   20023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53565 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/auto-129000/id_rsa Username:docker}
	I0124 10:19:44.511026   20023 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0124 10:19:44.515899   20023 start.go:128] duration metric: createHost completed in 11.930469825s
	I0124 10:19:44.515922   20023 start.go:83] releasing machines lock for "auto-129000", held for 11.930575282s
	I0124 10:19:44.516028   20023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-129000
	I0124 10:19:44.578388   20023 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0124 10:19:44.578390   20023 ssh_runner.go:195] Run: cat /version.json
	I0124 10:19:44.578481   20023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129000
	I0124 10:19:44.578483   20023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-129000
	I0124 10:19:44.651902   20023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53565 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/auto-129000/id_rsa Username:docker}
	I0124 10:19:44.651903   20023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53565 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/auto-129000/id_rsa Username:docker}
	I0124 10:19:44.808199   20023 ssh_runner.go:195] Run: systemctl --version
	I0124 10:19:44.812928   20023 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0124 10:19:44.818130   20023 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0124 10:19:44.842020   20023 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0124 10:19:44.842148   20023 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0124 10:19:44.850860   20023 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0124 10:19:44.865337   20023 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0124 10:19:44.881695   20023 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0124 10:19:44.881712   20023 start.go:472] detecting cgroup driver to use...
	I0124 10:19:44.881728   20023 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 10:19:44.881871   20023 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 10:19:44.897194   20023 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0124 10:19:44.906627   20023 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0124 10:19:44.916888   20023 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0124 10:19:44.916963   20023 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0124 10:19:44.928215   20023 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 10:19:44.938432   20023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0124 10:19:44.948299   20023 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 10:19:44.957677   20023 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0124 10:19:44.966224   20023 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0124 10:19:44.976354   20023 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0124 10:19:44.992611   20023 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0124 10:19:45.002854   20023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:19:45.078148   20023 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0124 10:19:45.150787   20023 start.go:472] detecting cgroup driver to use...
	I0124 10:19:45.150812   20023 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 10:19:45.150901   20023 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0124 10:19:45.163864   20023 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0124 10:19:45.163947   20023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0124 10:19:45.175897   20023 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 10:19:45.192079   20023 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0124 10:19:45.281504   20023 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0124 10:19:45.367531   20023 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0124 10:19:45.367550   20023 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0124 10:19:45.382504   20023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:19:45.476469   20023 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0124 10:19:45.731815   20023 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0124 10:19:45.802516   20023 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0124 10:19:45.877655   20023 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0124 10:19:45.950574   20023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:19:46.020815   20023 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0124 10:19:46.033271   20023 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0124 10:19:46.033364   20023 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0124 10:19:46.037493   20023 start.go:540] Will wait 60s for crictl version
	I0124 10:19:46.037540   20023 ssh_runner.go:195] Run: which crictl
	I0124 10:19:46.041300   20023 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0124 10:19:46.162936   20023 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.22
	RuntimeApiVersion:  v1alpha2
	I0124 10:19:46.163020   20023 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 10:19:46.193460   20023 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 10:19:46.266069   20023 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.22 ...
	I0124 10:19:46.266278   20023 cli_runner.go:164] Run: docker exec -t auto-129000 dig +short host.docker.internal
	I0124 10:19:46.380397   20023 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0124 10:19:46.380507   20023 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0124 10:19:46.385007   20023 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 10:19:46.395480   20023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" auto-129000
	I0124 10:19:46.457108   20023 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 10:19:46.457185   20023 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 10:19:46.480641   20023 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/coredns/coredns:v1.9.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/etcd:v3.3.8-0-gke.1
	registry.k8s.io/pause:test2
	
	-- /stdout --
	I0124 10:19:46.480654   20023 docker.go:636] registry.k8s.io/pause:3.9 wasn't preloaded
	I0124 10:19:46.480717   20023 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0124 10:19:46.488880   20023 ssh_runner.go:195] Run: which lz4
	I0124 10:19:46.492821   20023 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0124 10:19:46.496543   20023 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0124 10:19:46.496571   20023 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (441986565 bytes)
	I0124 10:19:45.850811   19842 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53322/healthz ...
	I0124 10:19:45.856689   19842 api_server.go:278] https://127.0.0.1:53322/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0124 10:19:45.856707   19842 retry.go:31] will retry after 381.329545ms: https://127.0.0.1:53322/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0124 10:19:46.238251   19842 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53322/healthz ...
	I0124 10:19:46.244800   19842 api_server.go:278] https://127.0.0.1:53322/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0124 10:19:46.244818   19842 retry.go:31] will retry after 422.765636ms: https://127.0.0.1:53322/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0124 10:19:46.667708   19842 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53322/healthz ...
	I0124 10:19:46.675916   19842 api_server.go:278] https://127.0.0.1:53322/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0124 10:19:46.675954   19842 retry.go:31] will retry after 473.074753ms: https://127.0.0.1:53322/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0124 10:19:47.149200   19842 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53322/healthz ...
	I0124 10:19:47.155020   19842 api_server.go:278] https://127.0.0.1:53322/healthz returned 200:
	ok
	I0124 10:19:47.175138   19842 system_pods.go:86] 7 kube-system pods found
	I0124 10:19:47.175161   19842 system_pods.go:89] "coredns-787d4945fb-qdr4z" [889a4379-266c-4e1a-88ec-21c05e4c43c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0124 10:19:47.175171   19842 system_pods.go:89] "etcd-kubernetes-upgrade-582000" [9c01c032-98f2-45a5-9d61-55a9f98eef08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0124 10:19:47.175178   19842 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-582000" [420ce232-8531-4bd8-bd95-29c70bb70781] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0124 10:19:47.175208   19842 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-582000" [8417bad1-484b-41dd-b5a0-cb7f06a4aafe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0124 10:19:47.175213   19842 system_pods.go:89] "kube-proxy-dt6r6" [53f94e24-a1fe-4ecb-8f59-ff85e8c270a2] Running
	I0124 10:19:47.175219   19842 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-582000" [709ba6d6-178c-492b-b568-fe121d631f48] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0124 10:19:47.175223   19842 system_pods.go:89] "storage-provisioner" [868827b1-6c6f-4db3-b432-46e134abe72e] Running
	I0124 10:19:47.177729   19842 api_server.go:140] control plane version: v1.26.1
	I0124 10:19:47.177768   19842 kubeadm.go:627] The running cluster does not require reconfiguration: 127.0.0.1
	I0124 10:19:47.177793   19842 kubeadm.go:681] Taking a shortcut, as the cluster seems to be properly configured
	I0124 10:19:47.177807   19842 kubeadm.go:637] restartCluster took 7.197536171s
	I0124 10:19:47.177818   19842 kubeadm.go:403] StartCluster complete in 7.251970598s
	I0124 10:19:47.177835   19842 settings.go:142] acquiring lock: {Name:mkeea169922107d4bc5deea23d2d200e61271e9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:19:47.177967   19842 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 10:19:47.178519   19842 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/kubeconfig: {Name:mk581b13c705409309a542f9aac4783c330d27c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:19:47.178867   19842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0124 10:19:47.178871   19842 addons.go:486] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0124 10:19:47.178958   19842 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-582000"
	I0124 10:19:47.178959   19842 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-582000"
	I0124 10:19:47.178984   19842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-582000"
	I0124 10:19:47.179023   19842 config.go:180] Loaded profile config "kubernetes-upgrade-582000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 10:19:47.179029   19842 addons.go:227] Setting addon storage-provisioner=true in "kubernetes-upgrade-582000"
	W0124 10:19:47.179079   19842 addons.go:236] addon storage-provisioner should already be in state true
	I0124 10:19:47.179149   19842 host.go:66] Checking if "kubernetes-upgrade-582000" exists ...
	I0124 10:19:47.179415   19842 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-582000 --format={{.State.Status}}
	I0124 10:19:47.180584   19842 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-582000 --format={{.State.Status}}
	I0124 10:19:47.180448   19842 kapi.go:59] client config for kubernetes-upgrade-582000: &rest.Config{Host:"https://127.0.0.1:53322", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2449ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0124 10:19:47.187929   19842 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-582000" context rescaled to 1 replicas
	I0124 10:19:47.187969   19842 start.go:221] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0124 10:19:47.209575   19842 out.go:177] * Verifying Kubernetes components...
	I0124 10:19:47.251954   19842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 10:19:47.272249   19842 kapi.go:59] client config for kubernetes-upgrade-582000: &rest.Config{Host:"https://127.0.0.1:53322", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubernetes-upgrade-582000/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2449ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0124 10:19:47.292111   19842 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0124 10:19:47.313312   19842 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0124 10:19:47.313334   19842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0124 10:19:47.313506   19842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-582000
	I0124 10:19:47.327127   19842 addons.go:227] Setting addon default-storageclass=true in "kubernetes-upgrade-582000"
	W0124 10:19:47.327156   19842 addons.go:236] addon default-storageclass should already be in state true
	I0124 10:19:47.327184   19842 host.go:66] Checking if "kubernetes-upgrade-582000" exists ...
	I0124 10:19:47.327817   19842 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-582000 --format={{.State.Status}}
	I0124 10:19:47.346135   19842 start.go:881] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0124 10:19:47.346228   19842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-582000
	I0124 10:19:47.401788   19842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53323 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/kubernetes-upgrade-582000/id_rsa Username:docker}
	I0124 10:19:47.419754   19842 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0124 10:19:47.419769   19842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0124 10:19:47.419885   19842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-582000
	I0124 10:19:47.432154   19842 api_server.go:51] waiting for apiserver process to appear ...
	I0124 10:19:47.432238   19842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:19:47.447346   19842 api_server.go:71] duration metric: took 259.347651ms to wait for apiserver process to appear ...
	I0124 10:19:47.447364   19842 api_server.go:87] waiting for apiserver healthz status ...
	I0124 10:19:47.447374   19842 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53322/healthz ...
	I0124 10:19:47.456695   19842 api_server.go:278] https://127.0.0.1:53322/healthz returned 200:
	ok
	I0124 10:19:47.459224   19842 api_server.go:140] control plane version: v1.26.1
	I0124 10:19:47.459238   19842 api_server.go:130] duration metric: took 11.86866ms to wait for apiserver health ...
	I0124 10:19:47.459245   19842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0124 10:19:47.466038   19842 system_pods.go:59] 7 kube-system pods found
	I0124 10:19:47.466055   19842 system_pods.go:61] "coredns-787d4945fb-qdr4z" [889a4379-266c-4e1a-88ec-21c05e4c43c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0124 10:19:47.466061   19842 system_pods.go:61] "etcd-kubernetes-upgrade-582000" [9c01c032-98f2-45a5-9d61-55a9f98eef08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0124 10:19:47.466066   19842 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-582000" [420ce232-8531-4bd8-bd95-29c70bb70781] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0124 10:19:47.466075   19842 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-582000" [8417bad1-484b-41dd-b5a0-cb7f06a4aafe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0124 10:19:47.466080   19842 system_pods.go:61] "kube-proxy-dt6r6" [53f94e24-a1fe-4ecb-8f59-ff85e8c270a2] Running
	I0124 10:19:47.466085   19842 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-582000" [709ba6d6-178c-492b-b568-fe121d631f48] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0124 10:19:47.466092   19842 system_pods.go:61] "storage-provisioner" [868827b1-6c6f-4db3-b432-46e134abe72e] Running
	I0124 10:19:47.466096   19842 system_pods.go:74] duration metric: took 6.847416ms to wait for pod list to return data ...
	I0124 10:19:47.466102   19842 kubeadm.go:578] duration metric: took 278.108948ms to wait for : map[apiserver:true system_pods:true] ...
	I0124 10:19:47.466114   19842 node_conditions.go:102] verifying NodePressure condition ...
	I0124 10:19:47.470909   19842 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0124 10:19:47.470927   19842 node_conditions.go:123] node cpu capacity is 6
	I0124 10:19:47.470941   19842 node_conditions.go:105] duration metric: took 4.822942ms to run NodePressure ...
	I0124 10:19:47.470949   19842 start.go:226] waiting for startup goroutines ...
	I0124 10:19:47.496340   19842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53323 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/kubernetes-upgrade-582000/id_rsa Username:docker}
	I0124 10:19:47.516985   19842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0124 10:19:47.609566   19842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0124 10:19:48.417771   19842 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0124 10:19:48.459801   19842 addons.go:488] enableAddons completed in 1.280941131s
	I0124 10:19:48.460386   19842 ssh_runner.go:195] Run: rm -f paused
	I0124 10:19:48.509890   19842 start.go:538] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0124 10:19:48.530921   19842 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-582000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2023-01-24 18:18:26 UTC, end at Tue 2023-01-24 18:19:50 UTC. --
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3379]: time="2023-01-24T18:19:37.250820577Z" level=info msg="API listen on [::]:2376"
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3379]: time="2023-01-24T18:19:37.258455409Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3379]: time="2023-01-24T18:19:37.260024609Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3379]: time="2023-01-24T18:19:37.260430161Z" level=info msg="Daemon shutdown complete"
	Jan 24 18:19:37 kubernetes-upgrade-582000 systemd[1]: docker.service: Succeeded.
	Jan 24 18:19:37 kubernetes-upgrade-582000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 24 18:19:37 kubernetes-upgrade-582000 systemd[1]: Starting Docker Application Container Engine...
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:37.325382906Z" level=info msg="Starting up"
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:37.326963046Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:37.327005449Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:37.327026484Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:37.327035379Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:37.328505189Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:37.328521838Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:37.328533670Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:37.328541432Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:37.361084074Z" level=info msg="Loading containers: start."
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:37.483406181Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:37.532651751Z" level=info msg="Loading containers: done."
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:37.576088697Z" level=info msg="Docker daemon" commit=42c8b31 graphdriver(s)=overlay2 version=20.10.22
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:37.576175817Z" level=info msg="Daemon has completed initialization"
	Jan 24 18:19:37 kubernetes-upgrade-582000 systemd[1]: Started Docker Application Container Engine.
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:37.610842919Z" level=info msg="API listen on [::]:2376"
	Jan 24 18:19:37 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:37.613480848Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 24 18:19:48 kubernetes-upgrade-582000 dockerd[3691]: time="2023-01-24T18:19:48.945390740Z" level=info msg="ignoring event" container=132b524e926045f4f93a206aa0dbd1564d4e4d25cb374470b31715d776fc2afc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
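	The Docker section above is the dockerd unit journal from inside the node; the Stopped/Started sequence at 18:19:37 shows the daemon being restarted before the replacement containers come up. A sketch for pulling the same journal directly, assuming the profile name from this log:
	  out/minikube-darwin-amd64 -p kubernetes-upgrade-582000 ssh -- sudo journalctl -u docker --no-pager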
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	132b524e92604       6e38f40d628db       7 seconds ago       Exited              storage-provisioner       1                   a0fd0ed0db292
	c88bc634635ed       46a6bb3c77ce0       7 seconds ago       Running             kube-proxy                1                   70a1f6709c32d
	c3b605e1cb6d4       deb04688c4a35       7 seconds ago       Running             kube-apiserver            1                   3e52bfb79a050
	3802a8ab29a4f       5185b96f0becf       12 seconds ago      Running             coredns                   1                   7efba5f57543c
	0c74b13135e1d       655493523f607       12 seconds ago      Running             kube-scheduler            1                   922ae5f229029
	016547c3a4cc0       e9c08e11b07f6       12 seconds ago      Running             kube-controller-manager   1                   fc48616935db4
	aee04a4bb75d2       fce326961ae2d       12 seconds ago      Running             etcd                      1                   74060f7e678c2
	7638e06b858fd       5185b96f0becf       25 seconds ago      Exited              coredns                   0                   ff8a2a8239c58
	277186b07ab8f       46a6bb3c77ce0       25 seconds ago      Created             kube-proxy                0                   65abcd8814e83
	f6401a9dc568b       e9c08e11b07f6       44 seconds ago      Exited              kube-controller-manager   0                   f7ec2090102c9
	9ec122e36dce6       655493523f607       45 seconds ago      Exited              kube-scheduler            0                   9df5a1c252963
	aad97b48d19ba       fce326961ae2d       45 seconds ago      Exited              etcd                      0                   e19aa0428916d
	018678aebe0eb       deb04688c4a35       45 seconds ago      Exited              kube-apiserver            0                   3e02680e3326f
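	The container table above uses the crictl ps -a layout; the first-generation control-plane containers are Exited while their attempt-1 replacements are Running. A sketch for regenerating it inside the node, assuming crictl is wired to the container runtime as in a standard minikube node image:
	  out/minikube-darwin-amd64 -p kubernetes-upgrade-582000 ssh -- sudo crictl ps -a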
	
	* 
	* ==> coredns [3802a8ab29a4] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:47640 - 59384 "HINFO IN 5273026046582944486.1133835516526740963. udp 57 false 512" NXDOMAIN qr,rd,ra 57 2.015019767s
	[INFO] 127.0.0.1:50869 - 59837 "HINFO IN 5273026046582944486.1133835516526740963. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019088267s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [7638e06b858f] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
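	Both coredns instances above spend their startup waiting on the Kubernetes API, and the surviving one logs a connection-refused check against 10.96.0.1:443 while the apiserver is unavailable. The per-container logs can be read back using the truncated IDs from the section headers (a sketch; docker resolves ID prefixes):
	  out/minikube-darwin-amd64 -p kubernetes-upgrade-582000 ssh -- sudo docker logs 3802a8ab29a4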
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-582000
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-582000
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Jan 2023 18:19:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-582000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Jan 2023 18:19:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Jan 2023 18:19:49 +0000   Tue, 24 Jan 2023 18:19:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Jan 2023 18:19:49 +0000   Tue, 24 Jan 2023 18:19:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Jan 2023 18:19:49 +0000   Tue, 24 Jan 2023 18:19:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Jan 2023 18:19:49 +0000   Tue, 24 Jan 2023 18:19:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-582000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 11af74b3a18d4d7295d17813eccf6dd7
	  System UUID:                11af74b3a18d4d7295d17813eccf6dd7
	  Boot ID:                    fa25f4a4-aa84-45e2-be97-79af4d2e0882
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.22
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-qdr4z                             100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     27s
	  kube-system                 etcd-kubernetes-upgrade-582000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kube-apiserver-kubernetes-upgrade-582000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-582000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-dt6r6                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-kubernetes-upgrade-582000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (12%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node kubernetes-upgrade-582000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node kubernetes-upgrade-582000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x7 over 55s)  kubelet          Node kubernetes-upgrade-582000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node kubernetes-upgrade-582000 event: Registered Node kubernetes-upgrade-582000 in Controller
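	The node description above was captured while the upgraded control plane was settling: Ready is True, no role labels are set, and kubelet and kube-proxy both report v1.26.1. A sketch for reproducing it against the same context, assuming the kubectl context name matches the profile:
	  kubectl --context kubernetes-upgrade-582000 describe node kubernetes-upgrade-582000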
	
	* 
	* ==> dmesg <==
	* [  +0.000075] FS-Cache: O-key=[8] 'dd49d50500000000'
	[  +0.000041] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.000045] FS-Cache: N-cookie d=0000000043c2c21c{9p.inode} n=00000000121b7180
	[  +0.000053] FS-Cache: N-key=[8] 'dd49d50500000000'
	[  +0.002893] FS-Cache: Duplicate cookie detected
	[  +0.000089] FS-Cache: O-cookie c=00000007 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000089] FS-Cache: O-cookie d=0000000043c2c21c{9p.inode} n=00000000bf0b291a
	[  +0.000108] FS-Cache: O-key=[8] 'dd49d50500000000'
	[  +0.000060] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.000141] FS-Cache: N-cookie d=0000000043c2c21c{9p.inode} n=000000007e30cefa
	[  +0.000096] FS-Cache: N-key=[8] 'dd49d50500000000'
	[  +2.986484] FS-Cache: Duplicate cookie detected
	[  +0.000037] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000053] FS-Cache: O-cookie d=0000000043c2c21c{9p.inode} n=00000000842a6415
	[  +0.000079] FS-Cache: O-key=[8] 'dc49d50500000000'
	[  +0.000051] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000055] FS-Cache: N-cookie d=0000000043c2c21c{9p.inode} n=000000007e30cefa
	[  +0.000067] FS-Cache: N-key=[8] 'dc49d50500000000'
	[  +0.414661] FS-Cache: Duplicate cookie detected
	[  +0.000053] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000049] FS-Cache: O-cookie d=0000000043c2c21c{9p.inode} n=00000000128976e4
	[  +0.000056] FS-Cache: O-key=[8] 'e749d50500000000'
	[  +0.000050] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000050] FS-Cache: N-cookie d=0000000043c2c21c{9p.inode} n=000000007861d945
	[  +0.000056] FS-Cache: N-key=[8] 'e749d50500000000'
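	The dmesg excerpt is dominated by repeated FS-Cache "Duplicate cookie detected" messages from the 9p mounts on the linuxkit kernel; nothing node-fatal appears in this window. The kernel ring buffer can be re-read from inside the node if needed (a sketch, same profile assumption as above):
	  out/minikube-darwin-amd64 -p kubernetes-upgrade-582000 ssh -- sudo dmesg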
	
	* 
	* ==> etcd [aad97b48d19b] <==
	* {"level":"info","ts":"2023-01-24T18:19:06.363Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-24T18:19:06.363Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-24T18:19:06.366Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-24T18:19:06.366Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-01-24T18:19:19.838Z","caller":"traceutil/trace.go:171","msg":"trace[1167292506] transaction","detail":"{read_only:false; response_revision:267; number_of_response:1; }","duration":"113.147716ms","start":"2023-01-24T18:19:19.725Z","end":"2023-01-24T18:19:19.838Z","steps":["trace[1167292506] 'process raft request'  (duration: 38.931976ms)","trace[1167292506] 'compare'  (duration: 74.087986ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-24T18:19:20.437Z","caller":"traceutil/trace.go:171","msg":"trace[453139896] linearizableReadLoop","detail":"{readStateIndex:274; appliedIndex:273; }","duration":"111.112968ms","start":"2023-01-24T18:19:20.326Z","end":"2023-01-24T18:19:20.437Z","steps":["trace[453139896] 'read index received'  (duration: 15.15272ms)","trace[453139896] 'applied index is now lower than readState.Index'  (duration: 95.959903ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-24T18:19:20.437Z","caller":"traceutil/trace.go:171","msg":"trace[337762385] transaction","detail":"{read_only:false; response_revision:269; number_of_response:1; }","duration":"187.734553ms","start":"2023-01-24T18:19:20.250Z","end":"2023-01-24T18:19:20.437Z","steps":["trace[337762385] 'process raft request'  (duration: 91.548458ms)","trace[337762385] 'compare'  (duration: 95.928695ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-24T18:19:20.437Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.235542ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
	{"level":"info","ts":"2023-01-24T18:19:20.438Z","caller":"traceutil/trace.go:171","msg":"trace[1777295960] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:269; }","duration":"111.457819ms","start":"2023-01-24T18:19:20.326Z","end":"2023-01-24T18:19:20.438Z","steps":["trace[1777295960] 'agreement among raft nodes before linearized reading'  (duration: 111.203496ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-24T18:19:23.916Z","caller":"traceutil/trace.go:171","msg":"trace[1109907262] linearizableReadLoop","detail":"{readStateIndex:305; appliedIndex:302; }","duration":"107.11739ms","start":"2023-01-24T18:19:23.809Z","end":"2023-01-24T18:19:23.916Z","steps":["trace[1109907262] 'read index received'  (duration: 42.117626ms)","trace[1109907262] 'applied index is now lower than readState.Index'  (duration: 64.999457ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-24T18:19:23.917Z","caller":"traceutil/trace.go:171","msg":"trace[366649347] transaction","detail":"{read_only:false; response_revision:299; number_of_response:1; }","duration":"120.025056ms","start":"2023-01-24T18:19:23.796Z","end":"2023-01-24T18:19:23.917Z","steps":["trace[366649347] 'process raft request'  (duration: 119.858408ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-24T18:19:23.916Z","caller":"traceutil/trace.go:171","msg":"trace[346078014] transaction","detail":"{read_only:false; response_revision:298; number_of_response:1; }","duration":"122.223478ms","start":"2023-01-24T18:19:23.794Z","end":"2023-01-24T18:19:23.916Z","steps":["trace[346078014] 'process raft request'  (duration: 122.096815ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-24T18:19:23.916Z","caller":"traceutil/trace.go:171","msg":"trace[584198933] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"128.48621ms","start":"2023-01-24T18:19:23.788Z","end":"2023-01-24T18:19:23.916Z","steps":["trace[584198933] 'process raft request'  (duration: 63.488913ms)","trace[584198933] 'compare'  (duration: 64.661198ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-24T18:19:23.917Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"107.458757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" ","response":"range_response_count:1 size:193"}
	{"level":"info","ts":"2023-01-24T18:19:23.917Z","caller":"traceutil/trace.go:171","msg":"trace[181002085] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:299; }","duration":"107.482834ms","start":"2023-01-24T18:19:23.809Z","end":"2023-01-24T18:19:23.917Z","steps":["trace[181002085] 'agreement among raft nodes before linearized reading'  (duration: 107.444907ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-24T18:19:24.683Z","caller":"traceutil/trace.go:171","msg":"trace[972019856] linearizableReadLoop","detail":"{readStateIndex:341; appliedIndex:340; }","duration":"118.972964ms","start":"2023-01-24T18:19:24.564Z","end":"2023-01-24T18:19:24.683Z","steps":["trace[972019856] 'read index received'  (duration: 118.802982ms)","trace[972019856] 'applied index is now lower than readState.Index'  (duration: 169.299µs)"],"step_count":2}
	{"level":"warn","ts":"2023-01-24T18:19:24.683Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"119.223261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2023-01-24T18:19:24.683Z","caller":"traceutil/trace.go:171","msg":"trace[253549005] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:335; }","duration":"119.312911ms","start":"2023-01-24T18:19:24.564Z","end":"2023-01-24T18:19:24.683Z","steps":["trace[253549005] 'agreement among raft nodes before linearized reading'  (duration: 119.150071ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-24T18:19:24.683Z","caller":"traceutil/trace.go:171","msg":"trace[222463319] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"135.490426ms","start":"2023-01-24T18:19:24.548Z","end":"2023-01-24T18:19:24.683Z","steps":["trace[222463319] 'process raft request'  (duration: 135.16043ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-24T18:19:24.723Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-01-24T18:19:24.724Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"kubernetes-upgrade-582000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"info","ts":"2023-01-24T18:19:24.735Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-01-24T18:19:24.739Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-24T18:19:24.740Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-24T18:19:24.740Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"kubernetes-upgrade-582000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [aee04a4bb75d] <==
	* {"level":"info","ts":"2023-01-24T18:19:39.072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-01-24T18:19:39.072Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-01-24T18:19:39.072Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-24T18:19:39.073Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-24T18:19:39.122Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-24T18:19:39.123Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-24T18:19:39.123Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-24T18:19:39.123Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-24T18:19:39.123Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-24T18:19:40.451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-01-24T18:19:40.452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-01-24T18:19:40.452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-01-24T18:19:40.452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-01-24T18:19:40.452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-01-24T18:19:40.452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-01-24T18:19:40.452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-01-24T18:19:40.454Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-582000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-24T18:19:40.454Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-24T18:19:40.454Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-24T18:19:40.455Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-24T18:19:40.455Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-24T18:19:40.455Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-01-24T18:19:40.456Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-24T18:19:49.573Z","caller":"traceutil/trace.go:171","msg":"trace[1735472471] transaction","detail":"{read_only:false; response_revision:350; number_of_response:1; }","duration":"113.843133ms","start":"2023-01-24T18:19:49.459Z","end":"2023-01-24T18:19:49.573Z","steps":["trace[1735472471] 'process raft request'  (duration: 34.575824ms)","trace[1735472471] 'compare'  (duration: 58.263046ms)","trace[1735472471] 'attach lease to kv pair' {req_type:put; key:/registry/events/kube-system/storage-provisioner.173d519b908cf739; req_size:839; } (duration: 20.711673ms)"],"step_count":3}
	{"level":"info","ts":"2023-01-24T18:19:50.802Z","caller":"traceutil/trace.go:171","msg":"trace[701174296] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"175.723973ms","start":"2023-01-24T18:19:50.626Z","end":"2023-01-24T18:19:50.802Z","steps":["trace[701174296] 'process raft request'  (duration: 102.45577ms)","trace[701174296] 'compare'  (duration: 73.17858ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  18:19:51 up  1:19,  0 users,  load average: 2.45, 2.14, 1.71
	Linux kubernetes-upgrade-582000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [018678aebe0e] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0124 18:19:25.730849       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0124 18:19:25.730863       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0124 18:19:25.731293       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [c3b605e1cb6d] <==
	* I0124 18:19:45.574126       1 controller.go:121] Starting legacy_token_tracking_controller
	I0124 18:19:45.574166       1 shared_informer.go:273] Waiting for caches to sync for configmaps
	I0124 18:19:45.574390       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0124 18:19:45.575496       1 controller.go:85] Starting OpenAPI V3 controller
	I0124 18:19:45.575518       1 naming_controller.go:291] Starting NamingConditionController
	I0124 18:19:45.575534       1 establishing_controller.go:76] Starting EstablishingController
	I0124 18:19:45.575543       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0124 18:19:45.575592       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0124 18:19:45.575605       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0124 18:19:45.574003       1 customresource_discovery_controller.go:288] Starting DiscoveryController
	I0124 18:19:45.584428       1 controller.go:85] Starting OpenAPI controller
	E0124 18:19:45.625655       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0124 18:19:45.627932       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0124 18:19:45.654830       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0124 18:19:45.672049       1 cache.go:39] Caches are synced for autoregister controller
	I0124 18:19:45.672055       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0124 18:19:45.672269       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0124 18:19:45.672280       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0124 18:19:45.672594       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0124 18:19:45.672855       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0124 18:19:45.674455       1 shared_informer.go:280] Caches are synced for configmaps
	I0124 18:19:45.674510       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0124 18:19:46.356011       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0124 18:19:46.577218       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0124 18:19:48.327552       1 controller.go:615] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [016547c3a4cc] <==
	* I0124 18:19:47.151087       1 controllermanager.go:622] Started "bootstrapsigner"
	I0124 18:19:47.151122       1 shared_informer.go:273] Waiting for caches to sync for bootstrap_signer
	I0124 18:19:47.154795       1 controllermanager.go:622] Started "persistentvolume-binder"
	I0124 18:19:47.154981       1 pv_controller_base.go:318] Starting persistent volume controller
	I0124 18:19:47.155020       1 shared_informer.go:273] Waiting for caches to sync for persistent volume
	I0124 18:19:47.157712       1 controllermanager.go:622] Started "tokencleaner"
	I0124 18:19:47.157781       1 tokencleaner.go:111] Starting token cleaner controller
	I0124 18:19:47.158177       1 shared_informer.go:273] Waiting for caches to sync for token_cleaner
	I0124 18:19:47.158649       1 shared_informer.go:280] Caches are synced for token_cleaner
	I0124 18:19:47.180197       1 controllermanager.go:622] Started "namespace"
	I0124 18:19:47.180402       1 namespace_controller.go:195] Starting namespace controller
	I0124 18:19:47.180451       1 shared_informer.go:273] Waiting for caches to sync for namespace
	I0124 18:19:47.188178       1 controllermanager.go:622] Started "serviceaccount"
	I0124 18:19:47.188247       1 serviceaccounts_controller.go:111] Starting service account controller
	I0124 18:19:47.188889       1 shared_informer.go:273] Waiting for caches to sync for service account
	I0124 18:19:47.192808       1 shared_informer.go:280] Caches are synced for tokens
	I0124 18:19:47.198941       1 controllermanager.go:622] Started "garbagecollector"
	I0124 18:19:47.199190       1 garbagecollector.go:154] Starting garbage collector controller
	I0124 18:19:47.199242       1 shared_informer.go:273] Waiting for caches to sync for garbage collector
	I0124 18:19:47.199415       1 graph_builder.go:291] GraphBuilder running
	I0124 18:19:47.203977       1 controllermanager.go:622] Started "csrapproving"
	I0124 18:19:47.204214       1 certificate_controller.go:112] Starting certificate controller "csrapproving"
	I0124 18:19:47.204229       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrapproving
	I0124 18:19:47.207081       1 controllermanager.go:622] Started "csrcleaner"
	I0124 18:19:47.207156       1 cleaner.go:82] Starting CSR cleaner controller
	
	* 
	* ==> kube-controller-manager [f6401a9dc568] <==
	* I0124 18:19:23.736502       1 shared_informer.go:280] Caches are synced for daemon sets
	I0124 18:19:23.742791       1 shared_informer.go:280] Caches are synced for endpoint_slice
	I0124 18:19:23.747173       1 shared_informer.go:280] Caches are synced for job
	I0124 18:19:23.760357       1 shared_informer.go:280] Caches are synced for deployment
	I0124 18:19:23.762574       1 shared_informer.go:280] Caches are synced for GC
	I0124 18:19:23.762660       1 shared_informer.go:280] Caches are synced for endpoint
	I0124 18:19:23.762728       1 shared_informer.go:280] Caches are synced for PVC protection
	I0124 18:19:23.768089       1 shared_informer.go:280] Caches are synced for resource quota
	I0124 18:19:23.778571       1 shared_informer.go:280] Caches are synced for HPA
	I0124 18:19:23.779858       1 shared_informer.go:280] Caches are synced for disruption
	I0124 18:19:23.786560       1 shared_informer.go:280] Caches are synced for taint
	I0124 18:19:23.786646       1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone: 
	I0124 18:19:23.786774       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0124 18:19:23.786845       1 taint_manager.go:211] "Sending events to api server"
	I0124 18:19:23.787206       1 event.go:294] "Event occurred" object="kubernetes-upgrade-582000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node kubernetes-upgrade-582000 event: Registered Node kubernetes-upgrade-582000 in Controller"
	W0124 18:19:23.792622       1 node_lifecycle_controller.go:1053] Missing timestamp for Node kubernetes-upgrade-582000. Assuming now as a timestamp.
	I0124 18:19:23.792682       1 node_lifecycle_controller.go:1254] Controller detected that zone  is now in state Normal.
	I0124 18:19:23.795476       1 shared_informer.go:280] Caches are synced for persistent volume
	I0124 18:19:23.802661       1 shared_informer.go:280] Caches are synced for resource quota
	I0124 18:19:24.117422       1 shared_informer.go:280] Caches are synced for garbage collector
	I0124 18:19:24.117562       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0124 18:19:24.119797       1 shared_informer.go:280] Caches are synced for garbage collector
	I0124 18:19:24.127187       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-dt6r6"
	I0124 18:19:24.222253       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 1"
	I0124 18:19:24.369673       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-qdr4z"
	
	* 
	* ==> kube-proxy [277186b07ab8] <==
	* 
	* 
	* ==> kube-proxy [c88bc634635e] <==
	* E0124 18:19:43.940432       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-582000": dial tcp 192.168.76.2:8443: connect: connection refused
	I0124 18:19:45.632965       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0124 18:19:45.633052       1 server_others.go:109] "Detected node IP" address="192.168.76.2"
	I0124 18:19:45.633107       1 server_others.go:535] "Using iptables proxy"
	I0124 18:19:45.663864       1 server_others.go:176] "Using iptables Proxier"
	I0124 18:19:45.663915       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0124 18:19:45.663923       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0124 18:19:45.663937       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0124 18:19:45.663990       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0124 18:19:45.664454       1 server.go:655] "Version info" version="v1.26.1"
	I0124 18:19:45.664490       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0124 18:19:45.665374       1 config.go:317] "Starting service config controller"
	I0124 18:19:45.665409       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0124 18:19:45.665810       1 config.go:226] "Starting endpoint slice config controller"
	I0124 18:19:45.665872       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0124 18:19:45.667536       1 config.go:444] "Starting node config controller"
	I0124 18:19:45.667600       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0124 18:19:45.766082       1 shared_informer.go:280] Caches are synced for service config
	I0124 18:19:45.766140       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0124 18:19:45.767927       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [0c74b13135e1] <==
	* W0124 18:19:43.366372       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0124 18:19:43.366448       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0124 18:19:43.409740       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0124 18:19:43.409799       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0124 18:19:43.443772       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0124 18:19:43.443823       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0124 18:19:43.544448       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0124 18:19:43.544475       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0124 18:19:43.564181       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0124 18:19:43.564272       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0124 18:19:43.729453       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0124 18:19:43.729518       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0124 18:19:45.601095       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0124 18:19:45.603475       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0124 18:19:45.603285       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0124 18:19:45.603804       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0124 18:19:45.603310       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0124 18:19:45.603854       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0124 18:19:45.603351       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0124 18:19:45.603866       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0124 18:19:45.603380       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0124 18:19:45.603874       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0124 18:19:45.627839       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0124 18:19:45.627956       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0124 18:19:51.892211       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [9ec122e36dce] <==
	* W0124 18:19:08.743661       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0124 18:19:08.743683       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0124 18:19:08.744437       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0124 18:19:08.744457       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0124 18:19:08.744888       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0124 18:19:08.745005       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0124 18:19:08.744893       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0124 18:19:08.745123       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0124 18:19:08.745532       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0124 18:19:08.745683       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0124 18:19:09.641388       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0124 18:19:09.641447       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0124 18:19:09.657145       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0124 18:19:09.657193       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0124 18:19:09.718173       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0124 18:19:09.718234       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0124 18:19:09.823000       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0124 18:19:09.823103       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0124 18:19:09.883637       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0124 18:19:09.883685       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0124 18:19:12.930696       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0124 18:19:24.722660       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0124 18:19:24.722953       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0124 18:19:24.723365       1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0124 18:19:24.723495       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2023-01-24 18:18:26 UTC, end at Tue 2023-01-24 18:19:53 UTC. --
	Jan 24 18:19:39 kubernetes-upgrade-582000 kubelet[1638]: E0124 18:19:39.894787    1638 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-582000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-582000?resourceVersion=0&timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jan 24 18:19:39 kubernetes-upgrade-582000 kubelet[1638]: E0124 18:19:39.894931    1638 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-582000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-582000?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jan 24 18:19:39 kubernetes-upgrade-582000 kubelet[1638]: E0124 18:19:39.895041    1638 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-582000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-582000?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jan 24 18:19:39 kubernetes-upgrade-582000 kubelet[1638]: E0124 18:19:39.895142    1638 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-582000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-582000?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jan 24 18:19:39 kubernetes-upgrade-582000 kubelet[1638]: E0124 18:19:39.895241    1638 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-582000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-582000?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jan 24 18:19:39 kubernetes-upgrade-582000 kubelet[1638]: E0124 18:19:39.895252    1638 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Jan 24 18:19:42 kubernetes-upgrade-582000 kubelet[1638]: E0124 18:19:42.045375    1638 controller.go:146] failed to ensure lease exists, will retry in 7s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-582000?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 24 18:19:43 kubernetes-upgrade-582000 kubelet[1638]: I0124 18:19:43.494415    1638 status_manager.go:698] "Failed to get status for pod" podUID=065fb30b29230ed9f0ea3cc76b42f844 pod="kube-system/kube-apiserver-kubernetes-upgrade-582000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-582000\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jan 24 18:19:43 kubernetes-upgrade-582000 kubelet[1638]: I0124 18:19:43.495140    1638 status_manager.go:698] "Failed to get status for pod" podUID=065fb30b29230ed9f0ea3cc76b42f844 pod="kube-system/kube-apiserver-kubernetes-upgrade-582000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-582000\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jan 24 18:19:43 kubernetes-upgrade-582000 kubelet[1638]: I0124 18:19:43.495513    1638 status_manager.go:698] "Failed to get status for pod" podUID=ef4d3562801aa17aa5e39f0b5ed2b62c pod="kube-system/kube-scheduler-kubernetes-upgrade-582000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-kubernetes-upgrade-582000\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jan 24 18:19:43 kubernetes-upgrade-582000 kubelet[1638]: I0124 18:19:43.501781    1638 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff8a2a8239c58a705f3be281a98be3f4e51ed178aa4106e99581f6c422762f47"
	Jan 24 18:19:43 kubernetes-upgrade-582000 kubelet[1638]: I0124 18:19:43.501827    1638 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82d8a010f843f65d0e6185855d1c153f2b57f00eb08604fe5b62012f5a2430e0"
	Jan 24 18:19:43 kubernetes-upgrade-582000 kubelet[1638]: I0124 18:19:43.501846    1638 status_manager.go:305] "Container startup changed for unknown container" pod="kube-system/kube-scheduler-kubernetes-upgrade-582000" containerID="docker://9ec122e36dce6ca1ec8abc21ce10263cae6ddf577f072460eb0c4c40185cb30c"
	Jan 24 18:19:43 kubernetes-upgrade-582000 kubelet[1638]: I0124 18:19:43.502230    1638 status_manager.go:698] "Failed to get status for pod" podUID=811973a03fba7129c2b21bbbdbae7cec pod="kube-system/kube-controller-manager-kubernetes-upgrade-582000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-kubernetes-upgrade-582000\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Jan 24 18:19:45 kubernetes-upgrade-582000 kubelet[1638]: E0124 18:19:45.577491    1638 reflector.go:140] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jan 24 18:19:49 kubernetes-upgrade-582000 kubelet[1638]: I0124 18:19:49.475138    1638 scope.go:115] "RemoveContainer" containerID="4d4b6c0ef95b02f0da68e8f5b2e97d07a31e4c9579e4eb7b3dff59b5543154ed"
	Jan 24 18:19:49 kubernetes-upgrade-582000 kubelet[1638]: I0124 18:19:49.475522    1638 scope.go:115] "RemoveContainer" containerID="132b524e926045f4f93a206aa0dbd1564d4e4d25cb374470b31715d776fc2afc"
	Jan 24 18:19:49 kubernetes-upgrade-582000 kubelet[1638]: E0124 18:19:49.475749    1638 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(868827b1-6c6f-4db3-b432-46e134abe72e)\"" pod="kube-system/storage-provisioner" podUID=868827b1-6c6f-4db3-b432-46e134abe72e
	Jan 24 18:19:49 kubernetes-upgrade-582000 kubelet[1638]: I0124 18:19:49.854886    1638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dt6r6" podStartSLOduration=25.854856444 pod.CreationTimestamp="2023-01-24 18:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-24 18:19:45.698881558 +0000 UTC m=+49.405199037" watchObservedRunningTime="2023-01-24 18:19:49.854856444 +0000 UTC m=+53.561173926"
	Jan 24 18:19:49 kubernetes-upgrade-582000 kubelet[1638]: I0124 18:19:49.855062    1638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-qdr4z" podStartSLOduration=25.855042335 pod.CreationTimestamp="2023-01-24 18:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-24 18:19:46.64467544 +0000 UTC m=+50.350992920" watchObservedRunningTime="2023-01-24 18:19:49.855042335 +0000 UTC m=+53.561359817"
	Jan 24 18:19:49 kubernetes-upgrade-582000 kubelet[1638]: I0124 18:19:49.954092    1638 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 24 18:19:49 kubernetes-upgrade-582000 kubelet[1638]: I0124 18:19:49.955425    1638 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 24 18:19:50 kubernetes-upgrade-582000 kubelet[1638]: I0124 18:19:50.488249    1638 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d4b6c0ef95b02f0da68e8f5b2e97d07a31e4c9579e4eb7b3dff59b5543154ed"
	Jan 24 18:19:50 kubernetes-upgrade-582000 kubelet[1638]: I0124 18:19:50.488409    1638 scope.go:115] "RemoveContainer" containerID="132b524e926045f4f93a206aa0dbd1564d4e4d25cb374470b31715d776fc2afc"
	Jan 24 18:19:50 kubernetes-upgrade-582000 kubelet[1638]: E0124 18:19:50.488520    1638 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(868827b1-6c6f-4db3-b432-46e134abe72e)\"" pod="kube-system/storage-provisioner" podUID=868827b1-6c6f-4db3-b432-46e134abe72e
	
	* 
	* ==> storage-provisioner [132b524e9260] <==
	* I0124 18:19:43.872785       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0124 18:19:48.927944       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-582000 -n kubernetes-upgrade-582000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-582000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-582000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-582000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-582000: (3.350674246s)
--- FAIL: TestKubernetesUpgrade (345.25s)
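
The kubelet entries in the post-mortem above all reduce to one symptom: every request to https://control-plane.minikube.internal:8443 (192.168.76.2:8443) is refused, so node status updates, lease renewals, and pod status reads keep failing until the apiserver container is back. Below is a minimal Go sketch, not part of the minikube test suite (the file name and default address are illustrative), that probes that endpoint directly from the host:

// apiserver_probe.go -- illustrative sketch only, not from the minikube sources.
// Checks whether the control-plane endpoint the kubelet keeps failing to reach
// ("dial tcp 192.168.76.2:8443: connect: connection refused") accepts TCP connections.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	addr := "192.168.76.2:8443" // address taken from the log above; pass a different one as the first argument
	if len(os.Args) > 1 {
		addr = os.Args[1]
	}
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "apiserver endpoint unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("apiserver endpoint %s is accepting TCP connections\n", addr)
}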

                                                
                                    
x
+
TestMissingContainerUpgrade (51.51s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.2247941676.exe start -p missing-upgrade-769000 --memory=2200 --driver=docker 
E0124 10:13:34.542798    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.2247941676.exe start -p missing-upgrade-769000 --memory=2200 --driver=docker : exit status 78 (38.112660912s)

                                                
                                                
-- stdout --
	* [missing-upgrade-769000] minikube v1.9.1 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-769000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-769000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 32.01 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 75.12 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 118.97 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 161.37 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 207.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 256.94 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 295.95 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 340.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 384.56 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 434.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 481.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 529.89 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-24 18:13:37.333245781 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-769000" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-24 18:13:56.917777180 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
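The drop-in shown in the diff above explains its own empty "ExecStart=" line: systemd rejects a non-oneshot service that ends up with more than one effective ExecStart= ("Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services"), and an empty assignment clears whatever was inherited from the base unit. A short Go sketch of that rule follows; it is a single-file simplification (it does not merge drop-in directories the way systemd does), and the path and helper name are illustrative only:

// execstart_check.go -- illustrative sketch only, not from the minikube sources.
// Counts the ExecStart= commands that remain effective after applying systemd's
// rule that an empty "ExecStart=" resets the accumulated command list.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func effectiveExecStarts(path string) (int, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	count := 0
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "ExecStart=") {
			continue
		}
		if line == "ExecStart=" {
			count = 0 // empty assignment clears previously accumulated commands
		} else {
			count++
		}
	}
	return count, sc.Err()
}

func main() {
	n, err := effectiveExecStarts("/lib/systemd/system/docker.service") // path from the log above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if n > 1 {
		fmt.Printf("unit has %d ExecStart= commands; systemd will refuse to start it\n", n)
	} else {
		fmt.Printf("unit has %d effective ExecStart= command(s)\n", n)
	}
}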
version_upgrade_test.go:317: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.2247941676.exe start -p missing-upgrade-769000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.2247941676.exe start -p missing-upgrade-769000 --memory=2200 --driver=docker : exit status 70 (4.097218793s)

                                                
                                                
-- stdout --
	* [missing-upgrade-769000] minikube v1.9.1 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-769000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-769000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:317: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.2247941676.exe start -p missing-upgrade-769000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.2247941676.exe start -p missing-upgrade-769000 --memory=2200 --driver=docker : exit status 70 (3.845772252s)

                                                
                                                
-- stdout --
	* [missing-upgrade-769000] minikube v1.9.1 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-769000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-769000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:323: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-01-24 10:14:09.389793 -0800 PST m=+2807.305180874
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-769000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-769000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4e756c797e438ac708d94049a12294a1636174464f8872724bdadbf8dd6ed6a2",
	        "Created": "2023-01-24T18:13:45.521392185Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 188499,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-24T18:13:45.798382887Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/4e756c797e438ac708d94049a12294a1636174464f8872724bdadbf8dd6ed6a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4e756c797e438ac708d94049a12294a1636174464f8872724bdadbf8dd6ed6a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/4e756c797e438ac708d94049a12294a1636174464f8872724bdadbf8dd6ed6a2/hosts",
	        "LogPath": "/var/lib/docker/containers/4e756c797e438ac708d94049a12294a1636174464f8872724bdadbf8dd6ed6a2/4e756c797e438ac708d94049a12294a1636174464f8872724bdadbf8dd6ed6a2-json.log",
	        "Name": "/missing-upgrade-769000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-769000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7e1d8af741f912b252da29f73fd0a8ba00ede858c262d6c824e035a73fd32912-init/diff:/var/lib/docker/overlay2/ca806d60e032750fb69c7badf9f4997738a2b81bfb5912b54cd5771a42db76fb/diff:/var/lib/docker/overlay2/f847d4e750bb400c91b1e043964825c29b0b02878265fe3437ace783c0712621/diff:/var/lib/docker/overlay2/4c97a08be5febfd68e8a30db90d194f25241e8ad94e921091bca8a86e18f4020/diff:/var/lib/docker/overlay2/f16e66080f2657efe212351eca7691357c2c35eed6f7d10b112bdb808bae64b2/diff:/var/lib/docker/overlay2/29f6572e606a68090178ad0b8c1d4a153d4a0a3e98998b3280dde542be76d182/diff:/var/lib/docker/overlay2/0b174254b71da2f3574dfd9bc32cede212a40e18450b398a31aad42a33a1c7f5/diff:/var/lib/docker/overlay2/0b69634403c40116f7e58cb5aeba20851b0d7b04ea854ca408253a60195221b8/diff:/var/lib/docker/overlay2/5f290bcce646f39d9d36f5b59646d810b99e6b181a202c5cca8de134766409d8/diff:/var/lib/docker/overlay2/1892803dde720d65fe83ad06603b2152947fb8e51498cfa60b6165818b4afb8a/diff:/var/lib/docker/overlay2/615000
eef3f5ae8d73ac8452093b02982e02ec58d986eaaa5b0735f93e7b6c5a/diff:/var/lib/docker/overlay2/1a0653b9c57e0a0914a73ebec708611bf6114e3b76e0900f4b3382f0271dcb64/diff:/var/lib/docker/overlay2/9827d070845e9b92e8743a6bd04853deef8be0735f2db07b295da62f705b5676/diff:/var/lib/docker/overlay2/ecfee87b2cee453136254a0bfcb04c67d2f5ac08551c945572ccd88e4c59dba6/diff:/var/lib/docker/overlay2/0fbcfbbd0ee3907cd6f3f2e6f5b91a767510ed7084ded0baf9bc0bb8434d29ad/diff:/var/lib/docker/overlay2/4eb02c490590e7145551663a67911cffb680684879f2f62fb9dd2f736dac3b28/diff:/var/lib/docker/overlay2/f4822dbc23ec4b5e78792134884b0efe65f2ebdacbe3ca11a3fe9979bdd15a7f/diff:/var/lib/docker/overlay2/f66796a2193c86e4e1f981689f0fd89b72dc677f64b031bcaf1af0cdea18d512/diff:/var/lib/docker/overlay2/5323d5dd426ededf38f055e25b41543a6804425414f034f9a0bf5773a74628dc/diff:/var/lib/docker/overlay2/365d84259e2bdad489386d4231d1a2e1f448e81671be77cb2fc783044848db81/diff:/var/lib/docker/overlay2/7b3dda520b15efcd08159c354a046f27e7983dc76b79c0a66fefdb4e42fdf84e/diff:/var/lib/d
ocker/overlay2/6dba9057eb22747b9d81dd7dfa015221acb95ece58e002f2e5d44ac4530c3d5f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7e1d8af741f912b252da29f73fd0a8ba00ede858c262d6c824e035a73fd32912/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7e1d8af741f912b252da29f73fd0a8ba00ede858c262d6c824e035a73fd32912/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7e1d8af741f912b252da29f73fd0a8ba00ede858c262d6c824e035a73fd32912/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-769000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-769000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-769000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-769000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-769000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0f8b4af7d9fffd659938ef00bd6f1a8a15540889a3ec6dce14a8a1e112836c8c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53010"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53011"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53012"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0f8b4af7d9ff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "e80366fe678c813dce22f3eda56218cf5f2435e537b770a4f2f7e27820dd0cab",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "996de8931b3d63a4fb143a9391d4bc914f95c26bea12eda8578a4dc9773b702e",
	                    "EndpointID": "e80366fe678c813dce22f3eda56218cf5f2435e537b770a4f2f7e27820dd0cab",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
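The inspect dump above is the full container record; when only one field matters (here, which host ports 22/tcp, 2376/tcp, and 8443/tcp were published to), `docker inspect --format` with a Go template is enough. A small sketch, with the container name taken from this report and the rest illustrative:

// inspect_ports.go -- illustrative sketch only; mirrors the post-mortem "docker inspect"
// above but extracts just the published ports instead of dumping the whole record.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	name := "missing-upgrade-769000" // container name from the report; adjust as needed
	if len(os.Args) > 1 {
		name = os.Args[1]
	}
	out, err := exec.Command("docker", "inspect",
		"--format", "{{json .NetworkSettings.Ports}}", name).Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "docker inspect failed: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("published ports for %s: %s", name, out)
}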
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-769000 -n missing-upgrade-769000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-769000 -n missing-upgrade-769000: exit status 6 (385.035516ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0124 10:14:09.822625   17907 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-769000" does not appear in /Users/jenkins/minikube-integration/15565-3057/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-769000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-769000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-769000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-769000: (2.333651114s)
--- FAIL: TestMissingContainerUpgrade (51.51s)
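
Each failed start above ends with the same hint, "See 'systemctl status docker.service' and 'journalctl -xe' for details", but those commands have to run inside the node, which is itself the Docker container inspected above. A sketch that collects that status from the host via `docker exec`; the container name and file name are illustrative:

// docker_service_status.go -- illustrative helper, not part of the test harness.
// Runs "systemctl status docker.service" inside the minikube node container so the
// failure hinted at in the logs above can be read from the host.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	node := "missing-upgrade-769000" // node container name from the report; adjust as needed
	if len(os.Args) > 1 {
		node = os.Args[1]
	}
	cmd := exec.Command("docker", "exec", node, "systemctl", "status", "docker.service", "--no-pager")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// systemctl status exits non-zero when the unit is inactive or failed,
		// which is exactly the interesting case here, so just note the exit code.
		fmt.Fprintf(os.Stderr, "systemctl exited with: %v\n", err)
	}
}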

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (49.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3991062698.exe start -p stopped-upgrade-419000 --memory=2200 --vm-driver=docker 
E0124 10:15:37.534841    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:15:48.226853    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3991062698.exe start -p stopped-upgrade-419000 --memory=2200 --vm-driver=docker : exit status 70 (38.789338032s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-419000] minikube v1.9.0 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1885979250
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-24 18:15:32.226297480 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-419000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-24 18:15:51.820298610 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-419000", then "minikube start -p stopped-upgrade-419000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 28.41 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 65.17 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 100.58 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 127.30 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 160.64 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 200.95 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 242.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 283.58 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 328.95 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 374.47 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 408.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 451.33 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 496.81 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 536.69 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-24 18:15:51.820298610 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3991062698.exe start -p stopped-upgrade-419000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3991062698.exe start -p stopped-upgrade-419000 --memory=2200 --vm-driver=docker : exit status 70 (4.302055862s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-419000] minikube v1.9.0 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig3121939237
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-419000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3991062698.exe start -p stopped-upgrade-419000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3991062698.exe start -p stopped-upgrade-419000 --memory=2200 --vm-driver=docker : exit status 70 (4.385798533s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-419000] minikube v1.9.0 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig3801467504
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-419000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:197: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (49.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (252.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-115000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0124 10:28:36.550325    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/enable-default-cni-129000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-115000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m12.140509612s)

                                                
                                                
-- stdout --
	* [old-k8s-version-115000] minikube v1.28.0 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-115000 in cluster old-k8s-version-115000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.22 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0124 10:28:31.509172   25956 out.go:296] Setting OutFile to fd 1 ...
	I0124 10:28:31.509331   25956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 10:28:31.509336   25956 out.go:309] Setting ErrFile to fd 2...
	I0124 10:28:31.509342   25956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 10:28:31.509454   25956 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
	I0124 10:28:31.509986   25956 out.go:303] Setting JSON to false
	I0124 10:28:31.528751   25956 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5286,"bootTime":1674579625,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0124 10:28:31.528848   25956 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0124 10:28:31.549505   25956 out.go:177] * [old-k8s-version-115000] minikube v1.28.0 on Darwin 13.1
	I0124 10:28:31.592208   25956 notify.go:220] Checking for updates...
	I0124 10:28:31.613226   25956 out.go:177]   - MINIKUBE_LOCATION=15565
	I0124 10:28:31.672190   25956 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 10:28:31.729964   25956 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0124 10:28:31.805404   25956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 10:28:31.865265   25956 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	I0124 10:28:31.924057   25956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0124 10:28:31.961811   25956 config.go:180] Loaded profile config "false-129000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 10:28:31.961898   25956 driver.go:365] Setting default libvirt URI to qemu:///system
	I0124 10:28:32.029001   25956 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0124 10:28:32.029148   25956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 10:28:32.187098   25956 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-24 18:28:32.085461027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 10:28:32.230104   25956 out.go:177] * Using the docker driver based on user configuration
	I0124 10:28:32.251051   25956 start.go:296] selected driver: docker
	I0124 10:28:32.251067   25956 start.go:840] validating driver "docker" against <nil>
	I0124 10:28:32.251080   25956 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0124 10:28:32.253997   25956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 10:28:32.399433   25956 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-24 18:28:32.30354817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:
/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 10:28:32.399574   25956 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0124 10:28:32.399713   25956 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0124 10:28:32.422662   25956 out.go:177] * Using Docker Desktop driver with root privileges
	I0124 10:28:32.444135   25956 cni.go:84] Creating CNI manager for ""
	I0124 10:28:32.444176   25956 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0124 10:28:32.444194   25956 start_flags.go:319] config:
	{Name:old-k8s-version-115000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-115000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:28:32.487341   25956 out.go:177] * Starting control plane node old-k8s-version-115000 in cluster old-k8s-version-115000
	I0124 10:28:32.508498   25956 cache.go:120] Beginning downloading kic base image for docker with docker
	I0124 10:28:32.531521   25956 out.go:177] * Pulling base image ...
	I0124 10:28:32.574302   25956 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0124 10:28:32.574308   25956 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0124 10:28:32.574357   25956 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0124 10:28:32.574372   25956 cache.go:57] Caching tarball of preloaded images
	I0124 10:28:32.574520   25956 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0124 10:28:32.574530   25956 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0124 10:28:32.574969   25956 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/config.json ...
	I0124 10:28:32.575117   25956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/config.json: {Name:mk1b224550e635a9178b3a0ce48d666e9f326f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:28:32.633751   25956 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0124 10:28:32.633778   25956 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0124 10:28:32.633795   25956 cache.go:193] Successfully downloaded all kic artifacts
	I0124 10:28:32.633857   25956 start.go:364] acquiring machines lock for old-k8s-version-115000: {Name:mk8bd7ad2f5bf8d8f939782c15c7e824af20d268 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0124 10:28:32.634017   25956 start.go:368] acquired machines lock for "old-k8s-version-115000" in 147.544µs
	I0124 10:28:32.634045   25956 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-115000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-115000 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0124 10:28:32.634098   25956 start.go:125] createHost starting for "" (driver="docker")
	I0124 10:28:32.656719   25956 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0124 10:28:32.657120   25956 start.go:159] libmachine.API.Create for "old-k8s-version-115000" (driver="docker")
	I0124 10:28:32.657155   25956 client.go:168] LocalClient.Create starting
	I0124 10:28:32.657311   25956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem
	I0124 10:28:32.657388   25956 main.go:141] libmachine: Decoding PEM data...
	I0124 10:28:32.657416   25956 main.go:141] libmachine: Parsing certificate...
	I0124 10:28:32.657510   25956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem
	I0124 10:28:32.657571   25956 main.go:141] libmachine: Decoding PEM data...
	I0124 10:28:32.657587   25956 main.go:141] libmachine: Parsing certificate...
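The "Reading certificate data / Decoding PEM data / Parsing certificate" lines above are libmachine loading the host's ca.pem and cert.pem. A minimal, self-contained Go sketch of that step (the path is copied from the log; the rest is illustrative and is not minikube's actual code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read the CA certificate that the log shows being loaded.
	data, err := os.ReadFile("/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem")
	if err != nil {
		panic(err)
	}
	// Decode the first PEM block and parse it as an X.509 certificate.
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		panic("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("subject:", cert.Subject, "expires:", cert.NotAfter)
}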
	I0124 10:28:32.685230   25956 cli_runner.go:164] Run: docker network inspect old-k8s-version-115000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0124 10:28:32.741830   25956 cli_runner.go:211] docker network inspect old-k8s-version-115000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0124 10:28:32.741934   25956 network_create.go:281] running [docker network inspect old-k8s-version-115000] to gather additional debugging logs...
	I0124 10:28:32.741954   25956 cli_runner.go:164] Run: docker network inspect old-k8s-version-115000
	W0124 10:28:32.798322   25956 cli_runner.go:211] docker network inspect old-k8s-version-115000 returned with exit code 1
	I0124 10:28:32.798354   25956 network_create.go:284] error running [docker network inspect old-k8s-version-115000]: docker network inspect old-k8s-version-115000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-115000
	I0124 10:28:32.798369   25956 network_create.go:286] output of [docker network inspect old-k8s-version-115000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-115000
	
	** /stderr **
	I0124 10:28:32.798447   25956 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0124 10:28:32.855534   25956 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0124 10:28:32.857110   25956 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0124 10:28:32.858651   25956 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0124 10:28:32.858940   25956 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0012892c0}
	I0124 10:28:32.858950   25956 network_create.go:123] attempt to create docker network old-k8s-version-115000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0124 10:28:32.859022   25956 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-115000 old-k8s-version-115000
	I0124 10:28:32.947074   25956 network_create.go:107] docker network old-k8s-version-115000 192.168.76.0/24 created
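The network.go lines above show three reserved /24 subnets being skipped before 192.168.76.0/24 is chosen for the new docker network. A rough sketch of that selection, assuming candidates start at 192.168.49.0/24 and step the third octet by 9 (an assumption inferred from the skipped subnets in this log, not taken from minikube's source):

package main

import "fmt"

// firstFreeSubnet walks candidate private /24s and returns the first one
// that is not already reserved by an existing docker network.
func firstFreeSubnet(reserved map[string]bool) string {
	for octet := 49; octet <= 247; octet += 9 { // assumed step of 9, as seen in the log
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !reserved[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	reserved := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	fmt.Println(firstFreeSubnet(reserved)) // prints 192.168.76.0/24
}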
	I0124 10:28:32.947111   25956 kic.go:117] calculated static IP "192.168.76.2" for the "old-k8s-version-115000" container
	I0124 10:28:32.947234   25956 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0124 10:28:33.004979   25956 cli_runner.go:164] Run: docker volume create old-k8s-version-115000 --label name.minikube.sigs.k8s.io=old-k8s-version-115000 --label created_by.minikube.sigs.k8s.io=true
	I0124 10:28:33.060648   25956 oci.go:103] Successfully created a docker volume old-k8s-version-115000
	I0124 10:28:33.060783   25956 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-115000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-115000 --entrypoint /usr/bin/test -v old-k8s-version-115000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -d /var/lib
	I0124 10:28:33.536323   25956 oci.go:107] Successfully prepared a docker volume old-k8s-version-115000
	I0124 10:28:33.536380   25956 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0124 10:28:33.536394   25956 kic.go:190] Starting extracting preloaded images to volume ...
	I0124 10:28:33.536516   25956 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-115000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir
	I0124 10:28:39.474451   25956 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-115000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir: (5.937818048s)
	I0124 10:28:39.474475   25956 kic.go:199] duration metric: took 5.938043 seconds to extract preloaded images to volume
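The ~6 s step above unpacks the lz4 preload tarball into the cluster's named volume by running tar inside the kicbase image. The same docker invocation, wrapped in a small Go program for reproducing it outside the test (tarball path, volume name, and image reference are taken from the log; this is a sketch, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"
	volume := "old-k8s-version-115000"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541"

	// Run tar inside the kicbase image so the preloaded images land in the
	// docker volume that later backs /var in the node container.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("extract failed:", err)
	}
}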
	I0124 10:28:39.474608   25956 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0124 10:28:39.626184   25956 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-115000 --name old-k8s-version-115000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-115000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-115000 --network old-k8s-version-115000 --ip 192.168.76.2 --volume old-k8s-version-115000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a
	I0124 10:28:40.037774   25956 cli_runner.go:164] Run: docker container inspect old-k8s-version-115000 --format={{.State.Running}}
	I0124 10:28:40.113116   25956 cli_runner.go:164] Run: docker container inspect old-k8s-version-115000 --format={{.State.Status}}
	I0124 10:28:40.188282   25956 cli_runner.go:164] Run: docker exec old-k8s-version-115000 stat /var/lib/dpkg/alternatives/iptables
	I0124 10:28:40.349124   25956 oci.go:144] the created container "old-k8s-version-115000" has a running status.
	I0124 10:28:40.349156   25956 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/old-k8s-version-115000/id_rsa...
	I0124 10:28:40.464599   25956 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/old-k8s-version-115000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0124 10:28:40.588354   25956 cli_runner.go:164] Run: docker container inspect old-k8s-version-115000 --format={{.State.Status}}
	I0124 10:28:40.651908   25956 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0124 10:28:40.651926   25956 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-115000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0124 10:28:40.764895   25956 cli_runner.go:164] Run: docker container inspect old-k8s-version-115000 --format={{.State.Status}}
	I0124 10:28:40.822944   25956 machine.go:88] provisioning docker machine ...
	I0124 10:28:40.822986   25956 ubuntu.go:169] provisioning hostname "old-k8s-version-115000"
	I0124 10:28:40.823082   25956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:28:40.915598   25956 main.go:141] libmachine: Using SSH client type: native
	I0124 10:28:40.915795   25956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55234 <nil> <nil>}
	I0124 10:28:40.915811   25956 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-115000 && echo "old-k8s-version-115000" | sudo tee /etc/hostname
	I0124 10:28:41.056408   25956 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-115000
	
	I0124 10:28:41.056530   25956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:28:41.116723   25956 main.go:141] libmachine: Using SSH client type: native
	I0124 10:28:41.116883   25956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55234 <nil> <nil>}
	I0124 10:28:41.116896   25956 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-115000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-115000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-115000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0124 10:28:41.250887   25956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0124 10:28:41.250907   25956 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3057/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3057/.minikube}
	I0124 10:28:41.250923   25956 ubuntu.go:177] setting up certificates
	I0124 10:28:41.250931   25956 provision.go:83] configureAuth start
	I0124 10:28:41.251003   25956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-115000
	I0124 10:28:41.309764   25956 provision.go:138] copyHostCerts
	I0124 10:28:41.309860   25956 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem, removing ...
	I0124 10:28:41.309867   25956 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem
	I0124 10:28:41.309996   25956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem (1123 bytes)
	I0124 10:28:41.310214   25956 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem, removing ...
	I0124 10:28:41.310220   25956 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem
	I0124 10:28:41.310282   25956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem (1675 bytes)
	I0124 10:28:41.310425   25956 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem, removing ...
	I0124 10:28:41.310430   25956 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem
	I0124 10:28:41.310490   25956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem (1078 bytes)
	I0124 10:28:41.310601   25956 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-115000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-115000]
	I0124 10:28:41.368230   25956 provision.go:172] copyRemoteCerts
	I0124 10:28:41.368287   25956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0124 10:28:41.368345   25956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:28:41.428090   25956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55234 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/old-k8s-version-115000/id_rsa Username:docker}
	I0124 10:28:41.523472   25956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0124 10:28:41.550928   25956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0124 10:28:41.567962   25956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0124 10:28:41.584879   25956 provision.go:86] duration metric: configureAuth took 333.93455ms
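configureAuth above generates a server certificate whose SANs cover the node IP, loopback, and the minikube hostnames before copying it into /etc/docker. A self-contained Go sketch of producing such a SAN-bearing certificate (self-signed here for brevity, whereas minikube signs it with its CA; the SAN list, organization, and 26280h lifetime are taken from the log and the profile config dump above):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-115000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list logged by provision.go.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-115000"},
		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}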
	I0124 10:28:41.584892   25956 ubuntu.go:193] setting minikube options for container-runtime
	I0124 10:28:41.585557   25956 config.go:180] Loaded profile config "old-k8s-version-115000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0124 10:28:41.585706   25956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:28:41.643781   25956 main.go:141] libmachine: Using SSH client type: native
	I0124 10:28:41.643929   25956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55234 <nil> <nil>}
	I0124 10:28:41.643941   25956 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0124 10:28:41.778757   25956 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0124 10:28:41.778775   25956 ubuntu.go:71] root file system type: overlay
	I0124 10:28:41.778923   25956 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0124 10:28:41.779003   25956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:28:41.837952   25956 main.go:141] libmachine: Using SSH client type: native
	I0124 10:28:41.838119   25956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55234 <nil> <nil>}
	I0124 10:28:41.838174   25956 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0124 10:28:41.979942   25956 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0124 10:28:41.980068   25956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:28:42.037635   25956 main.go:141] libmachine: Using SSH client type: native
	I0124 10:28:42.037806   25956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55234 <nil> <nil>}
	I0124 10:28:42.037819   25956 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0124 10:28:42.669952   25956 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-12-15 22:25:58.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-24 18:28:41.977119040 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0124 10:28:42.669970   25956 machine.go:91] provisioned docker machine in 1.84699497s
	I0124 10:28:42.669977   25956 client.go:171] LocalClient.Create took 10.012747664s
	I0124 10:28:42.669996   25956 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-115000" took 10.012810568s
	I0124 10:28:42.670007   25956 start.go:300] post-start starting for "old-k8s-version-115000" (driver="docker")
	I0124 10:28:42.670014   25956 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0124 10:28:42.670096   25956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0124 10:28:42.670157   25956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:28:42.730379   25956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55234 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/old-k8s-version-115000/id_rsa Username:docker}
	I0124 10:28:42.823846   25956 ssh_runner.go:195] Run: cat /etc/os-release
	I0124 10:28:42.827462   25956 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0124 10:28:42.827480   25956 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0124 10:28:42.827487   25956 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0124 10:28:42.827493   25956 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0124 10:28:42.827501   25956 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/addons for local assets ...
	I0124 10:28:42.827595   25956 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/files for local assets ...
	I0124 10:28:42.827760   25956 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem -> 43552.pem in /etc/ssl/certs
	I0124 10:28:42.827948   25956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0124 10:28:42.835536   25956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /etc/ssl/certs/43552.pem (1708 bytes)
	I0124 10:28:42.853181   25956 start.go:303] post-start completed in 183.159278ms
	I0124 10:28:42.853729   25956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-115000
	I0124 10:28:42.918610   25956 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/config.json ...
	I0124 10:28:42.919071   25956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0124 10:28:42.919162   25956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:28:42.980373   25956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55234 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/old-k8s-version-115000/id_rsa Username:docker}
	I0124 10:28:43.071582   25956 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0124 10:28:43.076280   25956 start.go:128] duration metric: createHost completed in 10.442104059s
	I0124 10:28:43.076304   25956 start.go:83] releasing machines lock for "old-k8s-version-115000", held for 10.442209882s
	I0124 10:28:43.076396   25956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-115000
	I0124 10:28:43.134272   25956 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0124 10:28:43.134274   25956 ssh_runner.go:195] Run: cat /version.json
	I0124 10:28:43.134368   25956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:28:43.134376   25956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:28:43.198090   25956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55234 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/old-k8s-version-115000/id_rsa Username:docker}
	I0124 10:28:43.198120   25956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55234 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/old-k8s-version-115000/id_rsa Username:docker}
	I0124 10:28:43.290386   25956 ssh_runner.go:195] Run: systemctl --version
	I0124 10:28:43.496295   25956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0124 10:28:43.501423   25956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0124 10:28:43.521138   25956 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0124 10:28:43.521215   25956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0124 10:28:43.535188   25956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0124 10:28:43.543087   25956 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0124 10:28:43.543108   25956 start.go:472] detecting cgroup driver to use...
	I0124 10:28:43.543128   25956 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 10:28:43.543249   25956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 10:28:43.556895   25956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0124 10:28:43.565678   25956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0124 10:28:43.573953   25956 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0124 10:28:43.574017   25956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0124 10:28:43.582941   25956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 10:28:43.591850   25956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0124 10:28:43.600992   25956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 10:28:43.610293   25956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0124 10:28:43.618455   25956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0124 10:28:43.627536   25956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0124 10:28:43.634951   25956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0124 10:28:43.642995   25956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:28:43.722565   25956 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0124 10:28:43.804041   25956 start.go:472] detecting cgroup driver to use...
	I0124 10:28:43.804075   25956 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 10:28:43.804167   25956 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0124 10:28:43.818062   25956 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0124 10:28:43.818147   25956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0124 10:28:43.836624   25956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 10:28:43.860493   25956 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0124 10:28:43.977880   25956 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0124 10:28:44.090534   25956 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0124 10:28:44.090557   25956 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0124 10:28:44.119401   25956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:28:44.215890   25956 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0124 10:28:44.516323   25956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 10:28:44.551647   25956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
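The 'configuring docker to use "cgroupfs" as cgroup driver' step above copies a small daemon.json into the node before restarting docker. The 144-byte payload itself is not printed in the log; the following is a hedged, assumed example of what such a file can look like, built around Docker's documented exec-opts key:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed daemon.json contents; the real file minikube writes is not
	// shown in this log.
	cfg := map[string]interface{}{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}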
	I0124 10:28:44.632845   25956 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.22 ...
	I0124 10:28:44.632935   25956 cli_runner.go:164] Run: docker exec -t old-k8s-version-115000 dig +short host.docker.internal
	I0124 10:28:44.751796   25956 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0124 10:28:44.751939   25956 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0124 10:28:44.757524   25956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 10:28:44.773121   25956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:28:44.836559   25956 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0124 10:28:44.836636   25956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 10:28:44.863178   25956 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0124 10:28:44.863197   25956 docker.go:560] Images already preloaded, skipping extraction
	I0124 10:28:44.863272   25956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 10:28:44.889038   25956 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0124 10:28:44.889055   25956 cache_images.go:84] Images are preloaded, skipping loading
	I0124 10:28:44.889163   25956 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0124 10:28:44.964631   25956 cni.go:84] Creating CNI manager for ""
	I0124 10:28:44.964647   25956 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0124 10:28:44.964665   25956 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0124 10:28:44.964679   25956 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-115000 NodeName:old-k8s-version-115000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0124 10:28:44.964816   25956 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-115000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-115000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0124 10:28:44.964891   25956 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-115000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-115000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0124 10:28:44.964953   25956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0124 10:28:44.973216   25956 binaries.go:44] Found k8s binaries, skipping transfer
	I0124 10:28:44.973294   25956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0124 10:28:44.980952   25956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0124 10:28:44.994090   25956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0124 10:28:45.007247   25956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0124 10:28:45.020177   25956 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0124 10:28:45.024168   25956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 10:28:45.034366   25956 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000 for IP: 192.168.76.2
	I0124 10:28:45.034385   25956 certs.go:186] acquiring lock for shared ca certs: {Name:mk86da88dcf76a0f52f98f7208bc1d04a2e55c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:28:45.034558   25956 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key
	I0124 10:28:45.034621   25956 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key
	I0124 10:28:45.034662   25956 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/client.key
	I0124 10:28:45.034675   25956 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/client.crt with IP's: []
	I0124 10:28:45.112842   25956 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/client.crt ...
	I0124 10:28:45.112857   25956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/client.crt: {Name:mk9d3eee06bd6f4f151bbbaa90edb30e08bcbe5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:28:45.113155   25956 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/client.key ...
	I0124 10:28:45.113163   25956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/client.key: {Name:mkdbf9876ce64ae9c5554d1fb1033561547482d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:28:45.113366   25956 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/apiserver.key.31bdca25
	I0124 10:28:45.113381   25956 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0124 10:28:45.433293   25956 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/apiserver.crt.31bdca25 ...
	I0124 10:28:45.433316   25956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/apiserver.crt.31bdca25: {Name:mk94fb23ee75d9b0b85c00eba728838de3dd0246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:28:45.433625   25956 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/apiserver.key.31bdca25 ...
	I0124 10:28:45.433634   25956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/apiserver.key.31bdca25: {Name:mkbcf57a010e8df6693d6223d5b3b8c6c32f2c82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:28:45.433823   25956 certs.go:333] copying /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/apiserver.crt
	I0124 10:28:45.433986   25956 certs.go:337] copying /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/apiserver.key
	I0124 10:28:45.434139   25956 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/proxy-client.key
	I0124 10:28:45.434153   25956 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/proxy-client.crt with IP's: []
	I0124 10:28:45.561859   25956 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/proxy-client.crt ...
	I0124 10:28:45.561874   25956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/proxy-client.crt: {Name:mkd987f474d8fdeaddffca722d43ad9c1a12b0aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:28:45.562189   25956 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/proxy-client.key ...
	I0124 10:28:45.562201   25956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/proxy-client.key: {Name:mkf7b914a52218a7cad603a7dbf3ad9aa202ca32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:28:45.562608   25956 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem (1338 bytes)
	W0124 10:28:45.562657   25956 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355_empty.pem, impossibly tiny 0 bytes
	I0124 10:28:45.562668   25956 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem (1679 bytes)
	I0124 10:28:45.562701   25956 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem (1078 bytes)
	I0124 10:28:45.562733   25956 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem (1123 bytes)
	I0124 10:28:45.562765   25956 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem (1675 bytes)
	I0124 10:28:45.562838   25956 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem (1708 bytes)
	I0124 10:28:45.563364   25956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0124 10:28:45.585275   25956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0124 10:28:45.606440   25956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0124 10:28:45.629746   25956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0124 10:28:45.653638   25956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0124 10:28:45.678760   25956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0124 10:28:45.702054   25956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0124 10:28:45.723737   25956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0124 10:28:45.749172   25956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /usr/share/ca-certificates/43552.pem (1708 bytes)
	I0124 10:28:45.769205   25956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0124 10:28:45.794492   25956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem --> /usr/share/ca-certificates/4355.pem (1338 bytes)
	I0124 10:28:45.816057   25956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0124 10:28:45.829982   25956 ssh_runner.go:195] Run: openssl version
	I0124 10:28:45.836811   25956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43552.pem && ln -fs /usr/share/ca-certificates/43552.pem /etc/ssl/certs/43552.pem"
	I0124 10:28:45.853498   25956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43552.pem
	I0124 10:28:45.858237   25956 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:34 /usr/share/ca-certificates/43552.pem
	I0124 10:28:45.858308   25956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43552.pem
	I0124 10:28:45.864193   25956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43552.pem /etc/ssl/certs/3ec20f2e.0"
	I0124 10:28:45.873849   25956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0124 10:28:45.883121   25956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:28:45.887881   25956 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:28:45.887935   25956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:28:45.893729   25956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0124 10:28:45.901996   25956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4355.pem && ln -fs /usr/share/ca-certificates/4355.pem /etc/ssl/certs/4355.pem"
	I0124 10:28:45.910409   25956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4355.pem
	I0124 10:28:45.914442   25956 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:34 /usr/share/ca-certificates/4355.pem
	I0124 10:28:45.914487   25956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4355.pem
	I0124 10:28:45.919921   25956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4355.pem /etc/ssl/certs/51391683.0"
	I0124 10:28:45.928883   25956 kubeadm.go:401] StartCluster: {Name:old-k8s-version-115000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-115000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:28:45.929018   25956 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0124 10:28:45.954657   25956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0124 10:28:45.963391   25956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0124 10:28:45.971993   25956 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0124 10:28:45.972065   25956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0124 10:28:45.982105   25956 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0124 10:28:45.982146   25956 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0124 10:28:46.032785   25956 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0124 10:28:46.032856   25956 kubeadm.go:322] [preflight] Running pre-flight checks
	I0124 10:28:46.364640   25956 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0124 10:28:46.364735   25956 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0124 10:28:46.364824   25956 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0124 10:28:46.602030   25956 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0124 10:28:46.603641   25956 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0124 10:28:46.610082   25956 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0124 10:28:46.675538   25956 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0124 10:28:46.695894   25956 out.go:204]   - Generating certificates and keys ...
	I0124 10:28:46.696092   25956 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0124 10:28:46.696207   25956 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0124 10:28:46.843199   25956 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0124 10:28:47.058611   25956 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0124 10:28:47.194647   25956 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0124 10:28:47.436314   25956 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0124 10:28:47.645334   25956 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0124 10:28:47.645514   25956 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-115000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0124 10:28:47.766564   25956 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0124 10:28:47.766756   25956 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-115000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0124 10:28:47.821720   25956 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0124 10:28:47.963353   25956 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0124 10:28:48.089871   25956 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0124 10:28:48.089940   25956 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0124 10:28:48.311646   25956 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0124 10:28:48.371473   25956 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0124 10:28:48.536180   25956 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0124 10:28:48.660898   25956 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0124 10:28:48.661999   25956 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0124 10:28:48.690175   25956 out.go:204]   - Booting up control plane ...
	I0124 10:28:48.690268   25956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0124 10:28:48.690333   25956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0124 10:28:48.690408   25956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0124 10:28:48.690478   25956 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0124 10:28:48.690595   25956 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0124 10:29:28.671429   25956 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0124 10:29:28.671571   25956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:29:28.671798   25956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:29:33.672153   25956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:29:33.672465   25956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:29:43.673747   25956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:29:43.673987   25956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:30:03.674404   25956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:30:03.674557   25956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:30:43.675757   25956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:30:43.676052   25956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:30:43.676065   25956 kubeadm.go:322] 
	I0124 10:30:43.676106   25956 kubeadm.go:322] Unfortunately, an error has occurred:
	I0124 10:30:43.676144   25956 kubeadm.go:322] 	timed out waiting for the condition
	I0124 10:30:43.676152   25956 kubeadm.go:322] 
	I0124 10:30:43.676191   25956 kubeadm.go:322] This error is likely caused by:
	I0124 10:30:43.676233   25956 kubeadm.go:322] 	- The kubelet is not running
	I0124 10:30:43.676343   25956 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0124 10:30:43.676356   25956 kubeadm.go:322] 
	I0124 10:30:43.676442   25956 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0124 10:30:43.676472   25956 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0124 10:30:43.676497   25956 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0124 10:30:43.676511   25956 kubeadm.go:322] 
	I0124 10:30:43.676631   25956 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0124 10:30:43.676749   25956 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0124 10:30:43.676841   25956 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0124 10:30:43.676896   25956 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0124 10:30:43.677011   25956 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0124 10:30:43.677083   25956 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0124 10:30:43.679698   25956 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0124 10:30:43.679767   25956 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0124 10:30:43.679883   25956 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
	I0124 10:30:43.679993   25956 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0124 10:30:43.680088   25956 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0124 10:30:43.680153   25956 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0124 10:30:43.680337   25956 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-115000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-115000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-115000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-115000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0124 10:30:43.680365   25956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0124 10:30:44.103272   25956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 10:30:44.113371   25956 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0124 10:30:44.113434   25956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0124 10:30:44.121269   25956 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0124 10:30:44.121294   25956 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0124 10:30:44.172739   25956 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0124 10:30:44.173988   25956 kubeadm.go:322] [preflight] Running pre-flight checks
	I0124 10:30:44.514627   25956 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0124 10:30:44.514713   25956 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0124 10:30:44.514801   25956 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0124 10:30:44.760140   25956 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0124 10:30:44.762715   25956 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0124 10:30:44.770002   25956 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0124 10:30:44.841472   25956 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0124 10:30:44.862026   25956 out.go:204]   - Generating certificates and keys ...
	I0124 10:30:44.862093   25956 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0124 10:30:44.862155   25956 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0124 10:30:44.862231   25956 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0124 10:30:44.862287   25956 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0124 10:30:44.862347   25956 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0124 10:30:44.862397   25956 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0124 10:30:44.862453   25956 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0124 10:30:44.862502   25956 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0124 10:30:44.862557   25956 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0124 10:30:44.862615   25956 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0124 10:30:44.862646   25956 kubeadm.go:322] [certs] Using the existing "sa" key
	I0124 10:30:44.862703   25956 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0124 10:30:45.013931   25956 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0124 10:30:45.324651   25956 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0124 10:30:45.584278   25956 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0124 10:30:45.985031   25956 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0124 10:30:45.985499   25956 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0124 10:30:46.021164   25956 out.go:204]   - Booting up control plane ...
	I0124 10:30:46.021275   25956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0124 10:30:46.021376   25956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0124 10:30:46.021428   25956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0124 10:30:46.021487   25956 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0124 10:30:46.021619   25956 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0124 10:31:25.995033   25956 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0124 10:31:25.995566   25956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:31:25.995742   25956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:31:30.997508   25956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:31:30.997714   25956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:31:40.998319   25956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:31:40.998526   25956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:32:00.999849   25956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:32:01.000065   25956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:32:41.001697   25956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:32:41.001932   25956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:32:41.001946   25956 kubeadm.go:322] 
	I0124 10:32:41.001980   25956 kubeadm.go:322] Unfortunately, an error has occurred:
	I0124 10:32:41.002050   25956 kubeadm.go:322] 	timed out waiting for the condition
	I0124 10:32:41.002059   25956 kubeadm.go:322] 
	I0124 10:32:41.002100   25956 kubeadm.go:322] This error is likely caused by:
	I0124 10:32:41.002142   25956 kubeadm.go:322] 	- The kubelet is not running
	I0124 10:32:41.002257   25956 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0124 10:32:41.002269   25956 kubeadm.go:322] 
	I0124 10:32:41.002382   25956 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0124 10:32:41.002445   25956 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0124 10:32:41.002490   25956 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0124 10:32:41.002501   25956 kubeadm.go:322] 
	I0124 10:32:41.002650   25956 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0124 10:32:41.002763   25956 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0124 10:32:41.002870   25956 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0124 10:32:41.002924   25956 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0124 10:32:41.003018   25956 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0124 10:32:41.003056   25956 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0124 10:32:41.006009   25956 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0124 10:32:41.006077   25956 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0124 10:32:41.006181   25956 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
	I0124 10:32:41.006280   25956 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0124 10:32:41.006355   25956 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0124 10:32:41.006416   25956 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0124 10:32:41.006446   25956 kubeadm.go:403] StartCluster complete in 3m55.07605061s
	I0124 10:32:41.006530   25956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:32:41.029392   25956 logs.go:279] 0 containers: []
	W0124 10:32:41.029406   25956 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:32:41.029475   25956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:32:41.053530   25956 logs.go:279] 0 containers: []
	W0124 10:32:41.053544   25956 logs.go:281] No container was found matching "etcd"
	I0124 10:32:41.053626   25956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:32:41.076228   25956 logs.go:279] 0 containers: []
	W0124 10:32:41.076240   25956 logs.go:281] No container was found matching "coredns"
	I0124 10:32:41.076310   25956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:32:41.099860   25956 logs.go:279] 0 containers: []
	W0124 10:32:41.099874   25956 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:32:41.099952   25956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:32:41.122215   25956 logs.go:279] 0 containers: []
	W0124 10:32:41.122228   25956 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:32:41.122297   25956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:32:41.145143   25956 logs.go:279] 0 containers: []
	W0124 10:32:41.145159   25956 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:32:41.145226   25956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:32:41.169703   25956 logs.go:279] 0 containers: []
	W0124 10:32:41.169717   25956 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:32:41.169786   25956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:32:41.194802   25956 logs.go:279] 0 containers: []
	W0124 10:32:41.194815   25956 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:32:41.194831   25956 logs.go:124] Gathering logs for container status ...
	I0124 10:32:41.194839   25956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:32:43.245535   25956 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050668154s)
	I0124 10:32:43.245690   25956 logs.go:124] Gathering logs for kubelet ...
	I0124 10:32:43.245700   25956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:32:43.282200   25956 logs.go:124] Gathering logs for dmesg ...
	I0124 10:32:43.282216   25956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:32:43.294630   25956 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:32:43.294644   25956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:32:43.351069   25956 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:32:43.351082   25956 logs.go:124] Gathering logs for Docker ...
	I0124 10:32:43.351089   25956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0124 10:32:43.368000   25956 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0124 10:32:43.368021   25956 out.go:239] * 
	* 
	W0124 10:32:43.368135   25956 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0124 10:32:43.368152   25956 out.go:239] * 
	* 
	W0124 10:32:43.368877   25956 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0124 10:32:43.453717   25956 out.go:177] 
	W0124 10:32:43.496559   25956 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0124 10:32:43.496727   25956 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0124 10:32:43.496800   25956 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0124 10:32:43.518720   25956 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-115000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-115000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-115000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7",
	        "Created": "2023-01-24T18:28:39.694404231Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291454,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-24T18:28:40.028695555Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/hosts",
	        "LogPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7-json.log",
	        "Name": "/old-k8s-version-115000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-115000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-115000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0-init/diff:/var/lib/docker/overlay2/9c481b6fc4eda78a970c6b00c96c9f145b5d588195952c506fee1f3a38f6b827/diff:/var/lib/docker/overlay2/47481f9bc7214dddcf62375b5df8c94540c57703b1d6d51ac990043359419126/diff:/var/lib/docker/overlay2/b598a508e6cab858c4e3e0eb9d52d91575db756297302c95c476bb85082f44e0/diff:/var/lib/docker/overlay2/eb2938493863a763d06f90aa9a8774a6cf5891162d070c28c5ea356826cb9c00/diff:/var/lib/docker/overlay2/32de44262b0214eac920d9f248eab5c56907a2e7f6aa69c29c4e46c4074a7cac/diff:/var/lib/docker/overlay2/1a295a0be202f67a8806d1a09e94b6509e529ccee06e851bf52c63e932a41598/diff:/var/lib/docker/overlay2/f31de4b92cc5bd6d7f65b450b1907917e4402f07bc4105fb25aa1f8246c4a4e5/diff:/var/lib/docker/overlay2/91ed229d1177ed82e2ad15ef48cab26e269f4aac446f236ccf7602f7812b4844/diff:/var/lib/docker/overlay2/d76c7ee0e38e12ccde73d3892d573e4aace99b016da7a10649752a4fe4fe8638/diff:/var/lib/docker/overlay2/5ded14
3e1cfe20dc794363143043800e9ddaa724457e76dfc3480bc88bdcf50b/diff:/var/lib/docker/overlay2/93a2cbd1b1d5abd2ffd6d5d72e40d5e932e3cdee4ef9b08c0ff0e340f0b82250/diff:/var/lib/docker/overlay2/bb3f5e5d2796cb20f7b19463a04326cc5e79a70f606d384a018bf2677fdc9c59/diff:/var/lib/docker/overlay2/9cbefe5a0d987b7c33fa83edf8c1c1e50920e178002ea80eff9cdf15510e9f7b/diff:/var/lib/docker/overlay2/33e82fcf8ab49ebe12677ce9de48d21aef3b5793b44b0bcabc73c3674c581010/diff:/var/lib/docker/overlay2/e11a40d223ac259cc9385f5df423a4114d92d60488a253e2c2690c0b7457a8e8/diff:/var/lib/docker/overlay2/61a6833055db19e06daf765b9f85b5e3a0b5c194642afb06034ca3aba1f55909/diff:/var/lib/docker/overlay2/ee3319bed930e23040288482eca784537f46b47463ff427584d9b2b5211797e9/diff:/var/lib/docker/overlay2/eee58dbcea38f98302a06040b8570e985d703290fd458ac7329bfa0e55bd8448/diff:/var/lib/docker/overlay2/b658fe28fcf5e1fc3d6b2e9b4b847d6121a8b983e9c7fb1985d5ab2346e98f8b/diff:/var/lib/docker/overlay2/31ea9d8f709524c49028c58c0f8a05346bb654e8f78be840f557b2bedf65a54a/diff:/var/lib/d
ocker/overlay2/3cf9440c13010ba35d99312d9b15c642407767bf1e0fc45d05a2549c249afec7/diff:/var/lib/docker/overlay2/800878867e03c9e3e3b31fd2704a00aa2ae0e3986914887bf3d409b9550ff520/diff:/var/lib/docker/overlay2/581fb56aa8110438414995ed4c4a14b9912c75de1736b4b7d23e0b8fe720ecd9/diff:/var/lib/docker/overlay2/07b86cd484da6c8fb2f4bee904f870665d71591e8dc7be5efa53e1794b81d00f/diff:/var/lib/docker/overlay2/1079786f8be32b332c09a1406a8c3309f812525af6501891ff134bf591345a8d/diff:/var/lib/docker/overlay2/aaaabdfc565926946334338a8c5990344cee39d099a4b9f5151467564c8b476e/diff:/var/lib/docker/overlay2/f39918c58fc40801faea98e8de1249f865a73650e08eab314c9d152e6a3b34b5/diff:/var/lib/docker/overlay2/974fee46992fba128bb6ec5407ff459f67134f9674349df844ad7efddb1ce197/diff:/var/lib/docker/overlay2/da1fb4192fa1a6e35f4cad8340c23595b9197c7b307dc3acbddafef7c3eabc82/diff:/var/lib/docker/overlay2/3b77e137962b80628f381627a68f75ff468d7ccae4458d96fa0579ac771218fe/diff:/var/lib/docker/overlay2/3faa717465cabff3baa6c98c646cb65ad47d063b215430893bb7f045676
3a794/diff:/var/lib/docker/overlay2/9597f4dd91d0689567f5b93707e47b90e9d9ba308488dff4c828008698d40802/diff:/var/lib/docker/overlay2/04df968177dde5eecc89152d1a182fb9b925a8092117778b87497db1e350ce61/diff:/var/lib/docker/overlay2/466c44ae57218cc92c3105e1e0070db81834724a69f8a4417391186e96073aae/diff:/var/lib/docker/overlay2/ec2a2a479f00a246da5470537471969c82985b2cfb98a4e35c9ff415d2a73141/diff:/var/lib/docker/overlay2/bce991b71b3cfd9fbda43bee1a05c1f7f5d1182ddbcf8f503851bd5874171f2b/diff:/var/lib/docker/overlay2/fe2929ef3680f5c3a0bd3eb8595ea97b8e43bba3f4c05803abf70a6c92a68114/diff:/var/lib/docker/overlay2/656654dcbc6d86ba911abe2cc555591ab82c37f6960cd0dad3551f4bf80122ee/diff:/var/lib/docker/overlay2/0e7faeb1891565534bd5d77c8619c32e7b51715e75dc04640b4ef0c3bc72a6b8/diff:/var/lib/docker/overlay2/9f0ce2e9bbc988036f4207b6b8237ba1f52855f6ee97e629aa058d0adcb5e7c4/diff:/var/lib/docker/overlay2/2bd42f9be2610b8910f7b39f5ce1792b9f608e775e6d3b39b12fef011686b5bd/diff:/var/lib/docker/overlay2/79703f59d1d8ea0485fb2360a75307a714a167
fc401d5ce68a58a9a201ea4152/diff:/var/lib/docker/overlay2/e589647f3d60f11426627ec161f2bf8878a577bc3388bb26e206fc5182f4f83c/diff:/var/lib/docker/overlay2/059bdd38117e61965c55837c7fd45d7e2825b162ea4e4cab6a656a436a7bbed6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-115000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-115000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-115000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-115000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-115000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aa575f37259f15aad30f1e98a1cee32e0ee3d18a75fba4a7e1f83ceb54a22cc0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55234"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55235"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55236"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55232"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55233"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/aa575f37259f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-115000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a86b483b8467",
	                        "old-k8s-version-115000"
	                    ],
	                    "NetworkID": "5ad39a0309903e0f7f41a6f2aca4e1831033f2fa0e547dc51ca28338a4ce6eed",
	                    "EndpointID": "d1a1a9e88cb895f566dfbf35e730e63d180921fd3d8f1717d0eebc147c038084",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
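Note on the post-mortem above: the harness dumps the full docker inspect JSON for the node container. When reproducing this locally and only a couple of fields matter (container state, the published SSH port), docker inspect also accepts a Go template via -f. The queries below are a sketch against the same container name and are not part of the test harness:

	docker inspect -f '{{.State.Status}}' old-k8s-version-115000
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-115000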
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-115000 -n old-k8s-version-115000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-115000 -n old-k8s-version-115000: exit status 6 (419.923984ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0124 10:32:44.095550   27086 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-115000" does not appear in /Users/jenkins/minikube-integration/15565-3057/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-115000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (252.66s)
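The failure mode for this first start is K8S_KUBELET_NOT_RUNNING: kubeadm's wait-control-plane phase times out while the preflight warnings show Docker using the "cgroupfs" cgroup driver, with "systemd" recommended. The log's own suggestions correspond roughly to the commands below; this is a sketch, where only --extra-config=kubelet.cgroup-driver=systemd comes from the suggestion and the remaining flags mirror the failing invocation shown above:

	out/minikube-darwin-amd64 update-context -p old-k8s-version-115000
	out/minikube-darwin-amd64 start -p old-k8s-version-115000 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd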

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-115000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-115000 create -f testdata/busybox.yaml: exit status 1 (36.371824ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-115000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-115000 create -f testdata/busybox.yaml failed: exit status 1
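The create fails immediately because the "old-k8s-version-115000" context was never written to the kubeconfig: FirstStart above exited before cluster setup completed. One way to confirm when reproducing, as a sketch using the kubeconfig path reported in the status error above:

	kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/15565-3057/kubeconfig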
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-115000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-115000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7",
	        "Created": "2023-01-24T18:28:39.694404231Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291454,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-24T18:28:40.028695555Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/hosts",
	        "LogPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7-json.log",
	        "Name": "/old-k8s-version-115000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-115000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-115000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0-init/diff:/var/lib/docker/overlay2/9c481b6fc4eda78a970c6b00c96c9f145b5d588195952c506fee1f3a38f6b827/diff:/var/lib/docker/overlay2/47481f9bc7214dddcf62375b5df8c94540c57703b1d6d51ac990043359419126/diff:/var/lib/docker/overlay2/b598a508e6cab858c4e3e0eb9d52d91575db756297302c95c476bb85082f44e0/diff:/var/lib/docker/overlay2/eb2938493863a763d06f90aa9a8774a6cf5891162d070c28c5ea356826cb9c00/diff:/var/lib/docker/overlay2/32de44262b0214eac920d9f248eab5c56907a2e7f6aa69c29c4e46c4074a7cac/diff:/var/lib/docker/overlay2/1a295a0be202f67a8806d1a09e94b6509e529ccee06e851bf52c63e932a41598/diff:/var/lib/docker/overlay2/f31de4b92cc5bd6d7f65b450b1907917e4402f07bc4105fb25aa1f8246c4a4e5/diff:/var/lib/docker/overlay2/91ed229d1177ed82e2ad15ef48cab26e269f4aac446f236ccf7602f7812b4844/diff:/var/lib/docker/overlay2/d76c7ee0e38e12ccde73d3892d573e4aace99b016da7a10649752a4fe4fe8638/diff:/var/lib/docker/overlay2/5ded14
3e1cfe20dc794363143043800e9ddaa724457e76dfc3480bc88bdcf50b/diff:/var/lib/docker/overlay2/93a2cbd1b1d5abd2ffd6d5d72e40d5e932e3cdee4ef9b08c0ff0e340f0b82250/diff:/var/lib/docker/overlay2/bb3f5e5d2796cb20f7b19463a04326cc5e79a70f606d384a018bf2677fdc9c59/diff:/var/lib/docker/overlay2/9cbefe5a0d987b7c33fa83edf8c1c1e50920e178002ea80eff9cdf15510e9f7b/diff:/var/lib/docker/overlay2/33e82fcf8ab49ebe12677ce9de48d21aef3b5793b44b0bcabc73c3674c581010/diff:/var/lib/docker/overlay2/e11a40d223ac259cc9385f5df423a4114d92d60488a253e2c2690c0b7457a8e8/diff:/var/lib/docker/overlay2/61a6833055db19e06daf765b9f85b5e3a0b5c194642afb06034ca3aba1f55909/diff:/var/lib/docker/overlay2/ee3319bed930e23040288482eca784537f46b47463ff427584d9b2b5211797e9/diff:/var/lib/docker/overlay2/eee58dbcea38f98302a06040b8570e985d703290fd458ac7329bfa0e55bd8448/diff:/var/lib/docker/overlay2/b658fe28fcf5e1fc3d6b2e9b4b847d6121a8b983e9c7fb1985d5ab2346e98f8b/diff:/var/lib/docker/overlay2/31ea9d8f709524c49028c58c0f8a05346bb654e8f78be840f557b2bedf65a54a/diff:/var/lib/d
ocker/overlay2/3cf9440c13010ba35d99312d9b15c642407767bf1e0fc45d05a2549c249afec7/diff:/var/lib/docker/overlay2/800878867e03c9e3e3b31fd2704a00aa2ae0e3986914887bf3d409b9550ff520/diff:/var/lib/docker/overlay2/581fb56aa8110438414995ed4c4a14b9912c75de1736b4b7d23e0b8fe720ecd9/diff:/var/lib/docker/overlay2/07b86cd484da6c8fb2f4bee904f870665d71591e8dc7be5efa53e1794b81d00f/diff:/var/lib/docker/overlay2/1079786f8be32b332c09a1406a8c3309f812525af6501891ff134bf591345a8d/diff:/var/lib/docker/overlay2/aaaabdfc565926946334338a8c5990344cee39d099a4b9f5151467564c8b476e/diff:/var/lib/docker/overlay2/f39918c58fc40801faea98e8de1249f865a73650e08eab314c9d152e6a3b34b5/diff:/var/lib/docker/overlay2/974fee46992fba128bb6ec5407ff459f67134f9674349df844ad7efddb1ce197/diff:/var/lib/docker/overlay2/da1fb4192fa1a6e35f4cad8340c23595b9197c7b307dc3acbddafef7c3eabc82/diff:/var/lib/docker/overlay2/3b77e137962b80628f381627a68f75ff468d7ccae4458d96fa0579ac771218fe/diff:/var/lib/docker/overlay2/3faa717465cabff3baa6c98c646cb65ad47d063b215430893bb7f045676
3a794/diff:/var/lib/docker/overlay2/9597f4dd91d0689567f5b93707e47b90e9d9ba308488dff4c828008698d40802/diff:/var/lib/docker/overlay2/04df968177dde5eecc89152d1a182fb9b925a8092117778b87497db1e350ce61/diff:/var/lib/docker/overlay2/466c44ae57218cc92c3105e1e0070db81834724a69f8a4417391186e96073aae/diff:/var/lib/docker/overlay2/ec2a2a479f00a246da5470537471969c82985b2cfb98a4e35c9ff415d2a73141/diff:/var/lib/docker/overlay2/bce991b71b3cfd9fbda43bee1a05c1f7f5d1182ddbcf8f503851bd5874171f2b/diff:/var/lib/docker/overlay2/fe2929ef3680f5c3a0bd3eb8595ea97b8e43bba3f4c05803abf70a6c92a68114/diff:/var/lib/docker/overlay2/656654dcbc6d86ba911abe2cc555591ab82c37f6960cd0dad3551f4bf80122ee/diff:/var/lib/docker/overlay2/0e7faeb1891565534bd5d77c8619c32e7b51715e75dc04640b4ef0c3bc72a6b8/diff:/var/lib/docker/overlay2/9f0ce2e9bbc988036f4207b6b8237ba1f52855f6ee97e629aa058d0adcb5e7c4/diff:/var/lib/docker/overlay2/2bd42f9be2610b8910f7b39f5ce1792b9f608e775e6d3b39b12fef011686b5bd/diff:/var/lib/docker/overlay2/79703f59d1d8ea0485fb2360a75307a714a167
fc401d5ce68a58a9a201ea4152/diff:/var/lib/docker/overlay2/e589647f3d60f11426627ec161f2bf8878a577bc3388bb26e206fc5182f4f83c/diff:/var/lib/docker/overlay2/059bdd38117e61965c55837c7fd45d7e2825b162ea4e4cab6a656a436a7bbed6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-115000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-115000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-115000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-115000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-115000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aa575f37259f15aad30f1e98a1cee32e0ee3d18a75fba4a7e1f83ceb54a22cc0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55234"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55235"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55236"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55232"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55233"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/aa575f37259f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-115000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a86b483b8467",
	                        "old-k8s-version-115000"
	                    ],
	                    "NetworkID": "5ad39a0309903e0f7f41a6f2aca4e1831033f2fa0e547dc51ca28338a4ce6eed",
	                    "EndpointID": "d1a1a9e88cb895f566dfbf35e730e63d180921fd3d8f1717d0eebc147c038084",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-115000 -n old-k8s-version-115000
E0124 10:32:44.370317    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-115000 -n old-k8s-version-115000: exit status 6 (415.668528ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0124 10:32:44.606641   27101 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-115000" does not appear in /Users/jenkins/minikube-integration/15565-3057/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-115000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-115000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-115000:

-- stdout --
	[
	    {
	        "Id": "a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7",
	        "Created": "2023-01-24T18:28:39.694404231Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291454,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-24T18:28:40.028695555Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/hosts",
	        "LogPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7-json.log",
	        "Name": "/old-k8s-version-115000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-115000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-115000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0-init/diff:/var/lib/docker/overlay2/9c481b6fc4eda78a970c6b00c96c9f145b5d588195952c506fee1f3a38f6b827/diff:/var/lib/docker/overlay2/47481f9bc7214dddcf62375b5df8c94540c57703b1d6d51ac990043359419126/diff:/var/lib/docker/overlay2/b598a508e6cab858c4e3e0eb9d52d91575db756297302c95c476bb85082f44e0/diff:/var/lib/docker/overlay2/eb2938493863a763d06f90aa9a8774a6cf5891162d070c28c5ea356826cb9c00/diff:/var/lib/docker/overlay2/32de44262b0214eac920d9f248eab5c56907a2e7f6aa69c29c4e46c4074a7cac/diff:/var/lib/docker/overlay2/1a295a0be202f67a8806d1a09e94b6509e529ccee06e851bf52c63e932a41598/diff:/var/lib/docker/overlay2/f31de4b92cc5bd6d7f65b450b1907917e4402f07bc4105fb25aa1f8246c4a4e5/diff:/var/lib/docker/overlay2/91ed229d1177ed82e2ad15ef48cab26e269f4aac446f236ccf7602f7812b4844/diff:/var/lib/docker/overlay2/d76c7ee0e38e12ccde73d3892d573e4aace99b016da7a10649752a4fe4fe8638/diff:/var/lib/docker/overlay2/5ded14
3e1cfe20dc794363143043800e9ddaa724457e76dfc3480bc88bdcf50b/diff:/var/lib/docker/overlay2/93a2cbd1b1d5abd2ffd6d5d72e40d5e932e3cdee4ef9b08c0ff0e340f0b82250/diff:/var/lib/docker/overlay2/bb3f5e5d2796cb20f7b19463a04326cc5e79a70f606d384a018bf2677fdc9c59/diff:/var/lib/docker/overlay2/9cbefe5a0d987b7c33fa83edf8c1c1e50920e178002ea80eff9cdf15510e9f7b/diff:/var/lib/docker/overlay2/33e82fcf8ab49ebe12677ce9de48d21aef3b5793b44b0bcabc73c3674c581010/diff:/var/lib/docker/overlay2/e11a40d223ac259cc9385f5df423a4114d92d60488a253e2c2690c0b7457a8e8/diff:/var/lib/docker/overlay2/61a6833055db19e06daf765b9f85b5e3a0b5c194642afb06034ca3aba1f55909/diff:/var/lib/docker/overlay2/ee3319bed930e23040288482eca784537f46b47463ff427584d9b2b5211797e9/diff:/var/lib/docker/overlay2/eee58dbcea38f98302a06040b8570e985d703290fd458ac7329bfa0e55bd8448/diff:/var/lib/docker/overlay2/b658fe28fcf5e1fc3d6b2e9b4b847d6121a8b983e9c7fb1985d5ab2346e98f8b/diff:/var/lib/docker/overlay2/31ea9d8f709524c49028c58c0f8a05346bb654e8f78be840f557b2bedf65a54a/diff:/var/lib/d
ocker/overlay2/3cf9440c13010ba35d99312d9b15c642407767bf1e0fc45d05a2549c249afec7/diff:/var/lib/docker/overlay2/800878867e03c9e3e3b31fd2704a00aa2ae0e3986914887bf3d409b9550ff520/diff:/var/lib/docker/overlay2/581fb56aa8110438414995ed4c4a14b9912c75de1736b4b7d23e0b8fe720ecd9/diff:/var/lib/docker/overlay2/07b86cd484da6c8fb2f4bee904f870665d71591e8dc7be5efa53e1794b81d00f/diff:/var/lib/docker/overlay2/1079786f8be32b332c09a1406a8c3309f812525af6501891ff134bf591345a8d/diff:/var/lib/docker/overlay2/aaaabdfc565926946334338a8c5990344cee39d099a4b9f5151467564c8b476e/diff:/var/lib/docker/overlay2/f39918c58fc40801faea98e8de1249f865a73650e08eab314c9d152e6a3b34b5/diff:/var/lib/docker/overlay2/974fee46992fba128bb6ec5407ff459f67134f9674349df844ad7efddb1ce197/diff:/var/lib/docker/overlay2/da1fb4192fa1a6e35f4cad8340c23595b9197c7b307dc3acbddafef7c3eabc82/diff:/var/lib/docker/overlay2/3b77e137962b80628f381627a68f75ff468d7ccae4458d96fa0579ac771218fe/diff:/var/lib/docker/overlay2/3faa717465cabff3baa6c98c646cb65ad47d063b215430893bb7f045676
3a794/diff:/var/lib/docker/overlay2/9597f4dd91d0689567f5b93707e47b90e9d9ba308488dff4c828008698d40802/diff:/var/lib/docker/overlay2/04df968177dde5eecc89152d1a182fb9b925a8092117778b87497db1e350ce61/diff:/var/lib/docker/overlay2/466c44ae57218cc92c3105e1e0070db81834724a69f8a4417391186e96073aae/diff:/var/lib/docker/overlay2/ec2a2a479f00a246da5470537471969c82985b2cfb98a4e35c9ff415d2a73141/diff:/var/lib/docker/overlay2/bce991b71b3cfd9fbda43bee1a05c1f7f5d1182ddbcf8f503851bd5874171f2b/diff:/var/lib/docker/overlay2/fe2929ef3680f5c3a0bd3eb8595ea97b8e43bba3f4c05803abf70a6c92a68114/diff:/var/lib/docker/overlay2/656654dcbc6d86ba911abe2cc555591ab82c37f6960cd0dad3551f4bf80122ee/diff:/var/lib/docker/overlay2/0e7faeb1891565534bd5d77c8619c32e7b51715e75dc04640b4ef0c3bc72a6b8/diff:/var/lib/docker/overlay2/9f0ce2e9bbc988036f4207b6b8237ba1f52855f6ee97e629aa058d0adcb5e7c4/diff:/var/lib/docker/overlay2/2bd42f9be2610b8910f7b39f5ce1792b9f608e775e6d3b39b12fef011686b5bd/diff:/var/lib/docker/overlay2/79703f59d1d8ea0485fb2360a75307a714a167
fc401d5ce68a58a9a201ea4152/diff:/var/lib/docker/overlay2/e589647f3d60f11426627ec161f2bf8878a577bc3388bb26e206fc5182f4f83c/diff:/var/lib/docker/overlay2/059bdd38117e61965c55837c7fd45d7e2825b162ea4e4cab6a656a436a7bbed6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-115000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-115000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-115000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-115000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-115000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aa575f37259f15aad30f1e98a1cee32e0ee3d18a75fba4a7e1f83ceb54a22cc0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55234"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55235"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55236"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55232"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55233"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/aa575f37259f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-115000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a86b483b8467",
	                        "old-k8s-version-115000"
	                    ],
	                    "NetworkID": "5ad39a0309903e0f7f41a6f2aca4e1831033f2fa0e547dc51ca28338a4ce6eed",
	                    "EndpointID": "d1a1a9e88cb895f566dfbf35e730e63d180921fd3d8f1717d0eebc147c038084",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-115000 -n old-k8s-version-115000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-115000 -n old-k8s-version-115000: exit status 6 (418.01224ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0124 10:32:45.086053   27113 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-115000" does not appear in /Users/jenkins/minikube-integration/15565-3057/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-115000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.99s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-115000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0124 10:32:45.179019    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 10:32:46.930716    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:32:52.051628    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:32:53.698253    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:32:58.275071    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:32:58.539440    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:33:02.293907    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:33:04.248143    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
E0124 10:33:22.774915    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:33:26.310896    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/enable-default-cni-129000/client.crt: no such file or directory
E0124 10:33:39.782035    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:33:39.787682    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:33:39.798932    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:33:39.820808    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:33:39.860942    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:33:39.941138    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:33:40.101396    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:33:40.423538    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:33:41.064088    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:33:42.345033    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:33:44.905694    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:33:50.025879    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:33:53.995044    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/enable-default-cni-129000/client.crt: no such file or directory
E0124 10:34:00.266175    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:34:03.736255    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-115000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.206063522s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-115000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-115000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-115000 describe deploy/metrics-server -n kube-system: exit status 1 (35.19707ms)

** stderr ** 
	error: context "old-k8s-version-115000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-115000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-115000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-115000:

-- stdout --
	[
	    {
	        "Id": "a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7",
	        "Created": "2023-01-24T18:28:39.694404231Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291454,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-24T18:28:40.028695555Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/hosts",
	        "LogPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7-json.log",
	        "Name": "/old-k8s-version-115000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-115000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-115000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0-init/diff:/var/lib/docker/overlay2/9c481b6fc4eda78a970c6b00c96c9f145b5d588195952c506fee1f3a38f6b827/diff:/var/lib/docker/overlay2/47481f9bc7214dddcf62375b5df8c94540c57703b1d6d51ac990043359419126/diff:/var/lib/docker/overlay2/b598a508e6cab858c4e3e0eb9d52d91575db756297302c95c476bb85082f44e0/diff:/var/lib/docker/overlay2/eb2938493863a763d06f90aa9a8774a6cf5891162d070c28c5ea356826cb9c00/diff:/var/lib/docker/overlay2/32de44262b0214eac920d9f248eab5c56907a2e7f6aa69c29c4e46c4074a7cac/diff:/var/lib/docker/overlay2/1a295a0be202f67a8806d1a09e94b6509e529ccee06e851bf52c63e932a41598/diff:/var/lib/docker/overlay2/f31de4b92cc5bd6d7f65b450b1907917e4402f07bc4105fb25aa1f8246c4a4e5/diff:/var/lib/docker/overlay2/91ed229d1177ed82e2ad15ef48cab26e269f4aac446f236ccf7602f7812b4844/diff:/var/lib/docker/overlay2/d76c7ee0e38e12ccde73d3892d573e4aace99b016da7a10649752a4fe4fe8638/diff:/var/lib/docker/overlay2/5ded14
3e1cfe20dc794363143043800e9ddaa724457e76dfc3480bc88bdcf50b/diff:/var/lib/docker/overlay2/93a2cbd1b1d5abd2ffd6d5d72e40d5e932e3cdee4ef9b08c0ff0e340f0b82250/diff:/var/lib/docker/overlay2/bb3f5e5d2796cb20f7b19463a04326cc5e79a70f606d384a018bf2677fdc9c59/diff:/var/lib/docker/overlay2/9cbefe5a0d987b7c33fa83edf8c1c1e50920e178002ea80eff9cdf15510e9f7b/diff:/var/lib/docker/overlay2/33e82fcf8ab49ebe12677ce9de48d21aef3b5793b44b0bcabc73c3674c581010/diff:/var/lib/docker/overlay2/e11a40d223ac259cc9385f5df423a4114d92d60488a253e2c2690c0b7457a8e8/diff:/var/lib/docker/overlay2/61a6833055db19e06daf765b9f85b5e3a0b5c194642afb06034ca3aba1f55909/diff:/var/lib/docker/overlay2/ee3319bed930e23040288482eca784537f46b47463ff427584d9b2b5211797e9/diff:/var/lib/docker/overlay2/eee58dbcea38f98302a06040b8570e985d703290fd458ac7329bfa0e55bd8448/diff:/var/lib/docker/overlay2/b658fe28fcf5e1fc3d6b2e9b4b847d6121a8b983e9c7fb1985d5ab2346e98f8b/diff:/var/lib/docker/overlay2/31ea9d8f709524c49028c58c0f8a05346bb654e8f78be840f557b2bedf65a54a/diff:/var/lib/d
ocker/overlay2/3cf9440c13010ba35d99312d9b15c642407767bf1e0fc45d05a2549c249afec7/diff:/var/lib/docker/overlay2/800878867e03c9e3e3b31fd2704a00aa2ae0e3986914887bf3d409b9550ff520/diff:/var/lib/docker/overlay2/581fb56aa8110438414995ed4c4a14b9912c75de1736b4b7d23e0b8fe720ecd9/diff:/var/lib/docker/overlay2/07b86cd484da6c8fb2f4bee904f870665d71591e8dc7be5efa53e1794b81d00f/diff:/var/lib/docker/overlay2/1079786f8be32b332c09a1406a8c3309f812525af6501891ff134bf591345a8d/diff:/var/lib/docker/overlay2/aaaabdfc565926946334338a8c5990344cee39d099a4b9f5151467564c8b476e/diff:/var/lib/docker/overlay2/f39918c58fc40801faea98e8de1249f865a73650e08eab314c9d152e6a3b34b5/diff:/var/lib/docker/overlay2/974fee46992fba128bb6ec5407ff459f67134f9674349df844ad7efddb1ce197/diff:/var/lib/docker/overlay2/da1fb4192fa1a6e35f4cad8340c23595b9197c7b307dc3acbddafef7c3eabc82/diff:/var/lib/docker/overlay2/3b77e137962b80628f381627a68f75ff468d7ccae4458d96fa0579ac771218fe/diff:/var/lib/docker/overlay2/3faa717465cabff3baa6c98c646cb65ad47d063b215430893bb7f045676
3a794/diff:/var/lib/docker/overlay2/9597f4dd91d0689567f5b93707e47b90e9d9ba308488dff4c828008698d40802/diff:/var/lib/docker/overlay2/04df968177dde5eecc89152d1a182fb9b925a8092117778b87497db1e350ce61/diff:/var/lib/docker/overlay2/466c44ae57218cc92c3105e1e0070db81834724a69f8a4417391186e96073aae/diff:/var/lib/docker/overlay2/ec2a2a479f00a246da5470537471969c82985b2cfb98a4e35c9ff415d2a73141/diff:/var/lib/docker/overlay2/bce991b71b3cfd9fbda43bee1a05c1f7f5d1182ddbcf8f503851bd5874171f2b/diff:/var/lib/docker/overlay2/fe2929ef3680f5c3a0bd3eb8595ea97b8e43bba3f4c05803abf70a6c92a68114/diff:/var/lib/docker/overlay2/656654dcbc6d86ba911abe2cc555591ab82c37f6960cd0dad3551f4bf80122ee/diff:/var/lib/docker/overlay2/0e7faeb1891565534bd5d77c8619c32e7b51715e75dc04640b4ef0c3bc72a6b8/diff:/var/lib/docker/overlay2/9f0ce2e9bbc988036f4207b6b8237ba1f52855f6ee97e629aa058d0adcb5e7c4/diff:/var/lib/docker/overlay2/2bd42f9be2610b8910f7b39f5ce1792b9f608e775e6d3b39b12fef011686b5bd/diff:/var/lib/docker/overlay2/79703f59d1d8ea0485fb2360a75307a714a167
fc401d5ce68a58a9a201ea4152/diff:/var/lib/docker/overlay2/e589647f3d60f11426627ec161f2bf8878a577bc3388bb26e206fc5182f4f83c/diff:/var/lib/docker/overlay2/059bdd38117e61965c55837c7fd45d7e2825b162ea4e4cab6a656a436a7bbed6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-115000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-115000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-115000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-115000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-115000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aa575f37259f15aad30f1e98a1cee32e0ee3d18a75fba4a7e1f83ceb54a22cc0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55234"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55235"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55236"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55232"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55233"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/aa575f37259f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-115000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a86b483b8467",
	                        "old-k8s-version-115000"
	                    ],
	                    "NetworkID": "5ad39a0309903e0f7f41a6f2aca4e1831033f2fa0e547dc51ca28338a4ce6eed",
	                    "EndpointID": "d1a1a9e88cb895f566dfbf35e730e63d180921fd3d8f1717d0eebc147c038084",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-115000 -n old-k8s-version-115000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-115000 -n old-k8s-version-115000: exit status 6 (406.019441ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0124 10:34:14.799595   27233 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-115000" does not appear in /Users/jenkins/minikube-integration/15565-3057/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-115000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.71s)

TestStartStop/group/old-k8s-version/serial/SecondStart (498.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-115000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0124 10:34:20.461443    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:34:20.746527    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:34:37.400670    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:35:01.708058    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:35:05.080770    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:35:14.428578    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:35:25.657154    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:35:42.116327    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:35:43.323014    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/auto-129000/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-115000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m13.614636092s)

-- stdout --
	* [old-k8s-version-115000] minikube v1.28.0 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-115000 in cluster old-k8s-version-115000
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-115000" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.22 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0124 10:34:16.852240   27263 out.go:296] Setting OutFile to fd 1 ...
	I0124 10:34:16.852413   27263 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 10:34:16.852418   27263 out.go:309] Setting ErrFile to fd 2...
	I0124 10:34:16.852422   27263 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 10:34:16.852536   27263 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
	I0124 10:34:16.853019   27263 out.go:303] Setting JSON to false
	I0124 10:34:16.873041   27263 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5631,"bootTime":1674579625,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0124 10:34:16.873133   27263 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0124 10:34:16.895583   27263 out.go:177] * [old-k8s-version-115000] minikube v1.28.0 on Darwin 13.1
	I0124 10:34:16.937195   27263 notify.go:220] Checking for updates...
	I0124 10:34:16.937218   27263 out.go:177]   - MINIKUBE_LOCATION=15565
	I0124 10:34:16.958351   27263 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 10:34:16.979527   27263 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0124 10:34:17.001090   27263 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 10:34:17.022398   27263 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	I0124 10:34:17.044444   27263 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0124 10:34:17.066468   27263 config.go:180] Loaded profile config "old-k8s-version-115000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0124 10:34:17.104360   27263 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	I0124 10:34:17.125548   27263 driver.go:365] Setting default libvirt URI to qemu:///system
	I0124 10:34:17.187073   27263 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0124 10:34:17.187198   27263 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 10:34:17.326885   27263 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-24 18:34:17.236091919 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 10:34:17.370392   27263 out.go:177] * Using the docker driver based on existing profile
	I0124 10:34:17.391392   27263 start.go:296] selected driver: docker
	I0124 10:34:17.391419   27263 start.go:840] validating driver "docker" against &{Name:old-k8s-version-115000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-115000 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:34:17.391536   27263 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0124 10:34:17.395272   27263 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 10:34:17.538524   27263 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-24 18:34:17.446552997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 10:34:17.538698   27263 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0124 10:34:17.538718   27263 cni.go:84] Creating CNI manager for ""
	I0124 10:34:17.538730   27263 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0124 10:34:17.538739   27263 start_flags.go:319] config:
	{Name:old-k8s-version-115000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-115000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:34:17.582279   27263 out.go:177] * Starting control plane node old-k8s-version-115000 in cluster old-k8s-version-115000
	I0124 10:34:17.603426   27263 cache.go:120] Beginning downloading kic base image for docker with docker
	I0124 10:34:17.625516   27263 out.go:177] * Pulling base image ...
	I0124 10:34:17.667324   27263 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0124 10:34:17.667327   27263 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0124 10:34:17.667416   27263 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0124 10:34:17.667436   27263 cache.go:57] Caching tarball of preloaded images
	I0124 10:34:17.668155   27263 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0124 10:34:17.668323   27263 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0124 10:34:17.668715   27263 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/config.json ...
	I0124 10:34:17.723477   27263 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0124 10:34:17.723496   27263 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0124 10:34:17.723516   27263 cache.go:193] Successfully downloaded all kic artifacts
	I0124 10:34:17.723579   27263 start.go:364] acquiring machines lock for old-k8s-version-115000: {Name:mk8bd7ad2f5bf8d8f939782c15c7e824af20d268 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0124 10:34:17.723668   27263 start.go:368] acquired machines lock for "old-k8s-version-115000" in 70.665µs
	I0124 10:34:17.723689   27263 start.go:96] Skipping create...Using existing machine configuration
	I0124 10:34:17.723699   27263 fix.go:55] fixHost starting: 
	I0124 10:34:17.723930   27263 cli_runner.go:164] Run: docker container inspect old-k8s-version-115000 --format={{.State.Status}}
	I0124 10:34:17.781016   27263 fix.go:103] recreateIfNeeded on old-k8s-version-115000: state=Stopped err=<nil>
	W0124 10:34:17.781048   27263 fix.go:129] unexpected machine state, will restart: <nil>
	I0124 10:34:17.803028   27263 out.go:177] * Restarting existing docker container for "old-k8s-version-115000" ...
	I0124 10:34:17.824810   27263 cli_runner.go:164] Run: docker start old-k8s-version-115000
	I0124 10:34:18.175033   27263 cli_runner.go:164] Run: docker container inspect old-k8s-version-115000 --format={{.State.Status}}
	I0124 10:34:18.239452   27263 kic.go:426] container "old-k8s-version-115000" state is running.
	I0124 10:34:18.240053   27263 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-115000
	I0124 10:34:18.304468   27263 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/config.json ...
	I0124 10:34:18.304930   27263 machine.go:88] provisioning docker machine ...
	I0124 10:34:18.304956   27263 ubuntu.go:169] provisioning hostname "old-k8s-version-115000"
	I0124 10:34:18.305039   27263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:34:18.384151   27263 main.go:141] libmachine: Using SSH client type: native
	I0124 10:34:18.384352   27263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55502 <nil> <nil>}
	I0124 10:34:18.384362   27263 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-115000 && echo "old-k8s-version-115000" | sudo tee /etc/hostname
	I0124 10:34:18.529571   27263 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-115000
	
	I0124 10:34:18.529670   27263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:34:18.593257   27263 main.go:141] libmachine: Using SSH client type: native
	I0124 10:34:18.593418   27263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55502 <nil> <nil>}
	I0124 10:34:18.593433   27263 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-115000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-115000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-115000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0124 10:34:18.729764   27263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0124 10:34:18.729784   27263 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3057/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3057/.minikube}
	I0124 10:34:18.729816   27263 ubuntu.go:177] setting up certificates
	I0124 10:34:18.729826   27263 provision.go:83] configureAuth start
	I0124 10:34:18.729908   27263 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-115000
	I0124 10:34:18.786380   27263 provision.go:138] copyHostCerts
	I0124 10:34:18.786488   27263 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem, removing ...
	I0124 10:34:18.786501   27263 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem
	I0124 10:34:18.786608   27263 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem (1123 bytes)
	I0124 10:34:18.786829   27263 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem, removing ...
	I0124 10:34:18.786835   27263 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem
	I0124 10:34:18.786896   27263 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem (1675 bytes)
	I0124 10:34:18.787051   27263 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem, removing ...
	I0124 10:34:18.787057   27263 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem
	I0124 10:34:18.787115   27263 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem (1078 bytes)
	I0124 10:34:18.787239   27263 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-115000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-115000]
	I0124 10:34:18.954637   27263 provision.go:172] copyRemoteCerts
	I0124 10:34:18.954698   27263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0124 10:34:18.954779   27263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:34:19.012362   27263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55502 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/old-k8s-version-115000/id_rsa Username:docker}
	I0124 10:34:19.107115   27263 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0124 10:34:19.125519   27263 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0124 10:34:19.142835   27263 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0124 10:34:19.160096   27263 provision.go:86] duration metric: configureAuth took 430.254513ms
	I0124 10:34:19.160109   27263 ubuntu.go:193] setting minikube options for container-runtime
	I0124 10:34:19.160281   27263 config.go:180] Loaded profile config "old-k8s-version-115000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0124 10:34:19.160348   27263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:34:19.219317   27263 main.go:141] libmachine: Using SSH client type: native
	I0124 10:34:19.219483   27263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55502 <nil> <nil>}
	I0124 10:34:19.219492   27263 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0124 10:34:19.355866   27263 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0124 10:34:19.355886   27263 ubuntu.go:71] root file system type: overlay
	I0124 10:34:19.356021   27263 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0124 10:34:19.356124   27263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:34:19.413477   27263 main.go:141] libmachine: Using SSH client type: native
	I0124 10:34:19.413638   27263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55502 <nil> <nil>}
	I0124 10:34:19.413690   27263 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0124 10:34:19.557920   27263 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0124 10:34:19.558012   27263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:34:19.615691   27263 main.go:141] libmachine: Using SSH client type: native
	I0124 10:34:19.615870   27263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55502 <nil> <nil>}
	I0124 10:34:19.615882   27263 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0124 10:34:19.752376   27263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0124 10:34:19.752394   27263 machine.go:91] provisioned docker machine in 1.447444993s
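The docker.service update above follows a write-compare-swap pattern: the full unit is rendered to docker.service.new over SSH, diffed against the live unit, and only moved into place (followed by daemon-reload, enable and restart) when the content actually differs, so an already-correct host is left untouched. A minimal standalone sketch of that pattern in shell, with a hypothetical render_unit command standing in for the printf template used here:

    # Render the desired unit text into a staging file (render_unit is a placeholder).
    render_unit | sudo tee /lib/systemd/system/docker.service.new >/dev/null

    # diff -u exits non-zero when the files differ, so the swap/restart runs only on change.
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
        sudo systemctl daemon-reload
        sudo systemctl enable docker
        sudo systemctl restart docker
    }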
	I0124 10:34:19.752403   27263 start.go:300] post-start starting for "old-k8s-version-115000" (driver="docker")
	I0124 10:34:19.752414   27263 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0124 10:34:19.752497   27263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0124 10:34:19.752552   27263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:34:19.810558   27263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55502 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/old-k8s-version-115000/id_rsa Username:docker}
	I0124 10:34:19.905858   27263 ssh_runner.go:195] Run: cat /etc/os-release
	I0124 10:34:19.909500   27263 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0124 10:34:19.909515   27263 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0124 10:34:19.909522   27263 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0124 10:34:19.909526   27263 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0124 10:34:19.909536   27263 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/addons for local assets ...
	I0124 10:34:19.909637   27263 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/files for local assets ...
	I0124 10:34:19.909802   27263 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem -> 43552.pem in /etc/ssl/certs
	I0124 10:34:19.910001   27263 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0124 10:34:19.917457   27263 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /etc/ssl/certs/43552.pem (1708 bytes)
	I0124 10:34:19.934700   27263 start.go:303] post-start completed in 182.284592ms
	I0124 10:34:19.934782   27263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0124 10:34:19.934836   27263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:34:19.993260   27263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55502 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/old-k8s-version-115000/id_rsa Username:docker}
	I0124 10:34:20.084093   27263 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0124 10:34:20.088882   27263 fix.go:57] fixHost completed within 2.365167405s
	I0124 10:34:20.088899   27263 start.go:83] releasing machines lock for "old-k8s-version-115000", held for 2.365208993s
	I0124 10:34:20.088982   27263 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-115000
	I0124 10:34:20.148118   27263 ssh_runner.go:195] Run: cat /version.json
	I0124 10:34:20.148138   27263 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0124 10:34:20.148195   27263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:34:20.148234   27263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:34:20.211663   27263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55502 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/old-k8s-version-115000/id_rsa Username:docker}
	I0124 10:34:20.211808   27263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55502 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/old-k8s-version-115000/id_rsa Username:docker}
	I0124 10:34:20.303215   27263 ssh_runner.go:195] Run: systemctl --version
	I0124 10:34:20.507409   27263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0124 10:34:20.512377   27263 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0124 10:34:20.512434   27263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0124 10:34:20.520611   27263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0124 10:34:20.528297   27263 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0124 10:34:20.528314   27263 start.go:472] detecting cgroup driver to use...
	I0124 10:34:20.528330   27263 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 10:34:20.528418   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 10:34:20.541681   27263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0124 10:34:20.550090   27263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0124 10:34:20.559415   27263 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0124 10:34:20.559478   27263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0124 10:34:20.569676   27263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 10:34:20.578903   27263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0124 10:34:20.587890   27263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 10:34:20.596844   27263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0124 10:34:20.605635   27263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0124 10:34:20.614775   27263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0124 10:34:20.623328   27263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0124 10:34:20.631525   27263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:34:20.701689   27263 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0124 10:34:20.774528   27263 start.go:472] detecting cgroup driver to use...
	I0124 10:34:20.774548   27263 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 10:34:20.774618   27263 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0124 10:34:20.785701   27263 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0124 10:34:20.785771   27263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0124 10:34:20.796452   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 10:34:20.812192   27263 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0124 10:34:20.892401   27263 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0124 10:34:20.965082   27263 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0124 10:34:20.965137   27263 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
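The 144-byte /etc/docker/daemon.json copied here is not echoed in the log. A representative cgroupfs-oriented daemon.json of roughly that size (illustrative field values only, not the verbatim file from this run) would look like:

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }

The daemon-reload and docker restart on the next two lines are what make the new cgroup-driver setting take effect.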
	I0124 10:34:20.978945   27263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:34:21.057295   27263 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0124 10:34:21.314396   27263 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 10:34:21.347076   27263 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 10:34:21.423337   27263 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.22 ...
	I0124 10:34:21.423510   27263 cli_runner.go:164] Run: docker exec -t old-k8s-version-115000 dig +short host.docker.internal
	I0124 10:34:21.537258   27263 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0124 10:34:21.537368   27263 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0124 10:34:21.541970   27263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
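The host.minikube.internal entry is installed with a strip-then-append pattern so the step stays idempotent across restarts: any previous line for the name is filtered out of /etc/hosts, the fresh mapping is appended, and the rebuilt file is copied back over /etc/hosts via a temp file. The same shape works for any managed hosts entry; a small sketch using the IP reported above:

    NAME=host.minikube.internal       # entry owned by the provisioner
    IP=192.168.65.2                   # host IP resolved via dig in the log above
    # Drop any existing line for $NAME, append the new mapping, then copy the result back.
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$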
	I0124 10:34:21.551948   27263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:34:21.611326   27263 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0124 10:34:21.611401   27263 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 10:34:21.635474   27263 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0124 10:34:21.635490   27263 docker.go:560] Images already preloaded, skipping extraction
	I0124 10:34:21.635570   27263 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 10:34:21.660754   27263 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0124 10:34:21.660771   27263 cache_images.go:84] Images are preloaded, skipping loading
	I0124 10:34:21.660862   27263 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0124 10:34:21.735197   27263 cni.go:84] Creating CNI manager for ""
	I0124 10:34:21.735219   27263 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0124 10:34:21.735239   27263 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0124 10:34:21.735257   27263 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-115000 NodeName:old-k8s-version-115000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0124 10:34:21.735388   27263 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-115000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-115000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0124 10:34:21.735477   27263 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-115000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-115000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0124 10:34:21.735543   27263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0124 10:34:21.743570   27263 binaries.go:44] Found k8s binaries, skipping transfer
	I0124 10:34:21.743635   27263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0124 10:34:21.751164   27263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0124 10:34:21.763980   27263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0124 10:34:21.777056   27263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
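This copy stages the kubeadm config shown above on the node as /var/tmp/minikube/kubeadm.yaml.new. When debugging a run like this one, the rendered file can be read back from the machine directly; something along these lines should work (profile name taken from this run):

    # Print the staged kubeadm config from inside the node (assumes the container is running).
    minikube ssh -p old-k8s-version-115000 "sudo cat /var/tmp/minikube/kubeadm.yaml.new"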
	I0124 10:34:21.790585   27263 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0124 10:34:21.794525   27263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 10:34:21.804707   27263 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000 for IP: 192.168.76.2
	I0124 10:34:21.804727   27263 certs.go:186] acquiring lock for shared ca certs: {Name:mk86da88dcf76a0f52f98f7208bc1d04a2e55c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:34:21.804883   27263 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key
	I0124 10:34:21.804939   27263 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key
	I0124 10:34:21.805042   27263 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/client.key
	I0124 10:34:21.805121   27263 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/apiserver.key.31bdca25
	I0124 10:34:21.805203   27263 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/proxy-client.key
	I0124 10:34:21.805415   27263 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem (1338 bytes)
	W0124 10:34:21.805453   27263 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355_empty.pem, impossibly tiny 0 bytes
	I0124 10:34:21.805463   27263 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem (1679 bytes)
	I0124 10:34:21.805500   27263 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem (1078 bytes)
	I0124 10:34:21.805533   27263 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem (1123 bytes)
	I0124 10:34:21.805563   27263 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem (1675 bytes)
	I0124 10:34:21.805634   27263 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem (1708 bytes)
	I0124 10:34:21.806217   27263 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0124 10:34:21.823779   27263 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0124 10:34:21.841728   27263 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0124 10:34:21.859340   27263 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/old-k8s-version-115000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0124 10:34:21.893266   27263 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0124 10:34:21.911316   27263 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0124 10:34:21.928738   27263 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0124 10:34:21.946055   27263 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0124 10:34:21.963369   27263 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /usr/share/ca-certificates/43552.pem (1708 bytes)
	I0124 10:34:21.980910   27263 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0124 10:34:21.999228   27263 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem --> /usr/share/ca-certificates/4355.pem (1338 bytes)
	I0124 10:34:22.016737   27263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0124 10:34:22.029767   27263 ssh_runner.go:195] Run: openssl version
	I0124 10:34:22.035482   27263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43552.pem && ln -fs /usr/share/ca-certificates/43552.pem /etc/ssl/certs/43552.pem"
	I0124 10:34:22.043773   27263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43552.pem
	I0124 10:34:22.048012   27263 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:34 /usr/share/ca-certificates/43552.pem
	I0124 10:34:22.048065   27263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43552.pem
	I0124 10:34:22.053700   27263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43552.pem /etc/ssl/certs/3ec20f2e.0"
	I0124 10:34:22.061573   27263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0124 10:34:22.069655   27263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:34:22.073691   27263 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:34:22.073741   27263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:34:22.079319   27263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0124 10:34:22.087282   27263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4355.pem && ln -fs /usr/share/ca-certificates/4355.pem /etc/ssl/certs/4355.pem"
	I0124 10:34:22.096341   27263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4355.pem
	I0124 10:34:22.100564   27263 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:34 /usr/share/ca-certificates/4355.pem
	I0124 10:34:22.100634   27263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4355.pem
	I0124 10:34:22.106243   27263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4355.pem /etc/ssl/certs/51391683.0"
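Each of the three test/link sequences above publishes a CA certificate the way OpenSSL-based clients expect: the PEM lives under /usr/share/ca-certificates and a symlink named after its subject hash, with a .0 suffix, points at it from /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem in this run). The link name comes straight from openssl, so the step generalizes as:

    CERT=/usr/share/ca-certificates/minikubeCA.pem    # any of the certs copied above
    # Compute the subject-name hash OpenSSL uses when scanning a CA directory ...
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    # ... and expose the cert as /etc/ssl/certs/<hash>.0, where TLS clients look it up.
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"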
	I0124 10:34:22.114124   27263 kubeadm.go:401] StartCluster: {Name:old-k8s-version-115000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-115000 Namespace:default APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:34:22.114247   27263 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0124 10:34:22.138644   27263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0124 10:34:22.146824   27263 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0124 10:34:22.146839   27263 kubeadm.go:633] restartCluster start
	I0124 10:34:22.146897   27263 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0124 10:34:22.154107   27263 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:22.154213   27263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-115000
	I0124 10:34:22.214413   27263 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-115000" does not appear in /Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 10:34:22.214575   27263 kubeconfig.go:146] "old-k8s-version-115000" context is missing from /Users/jenkins/minikube-integration/15565-3057/kubeconfig - will repair!
	I0124 10:34:22.214874   27263 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/kubeconfig: {Name:mk581b13c705409309a542f9aac4783c330d27c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:34:22.216121   27263 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0124 10:34:22.223986   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:22.224036   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:22.232939   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
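From here the same apiserver probe repeats on roughly a 500 ms cadence (10:34:22.73, 10:34:23.23, ...) until a kube-apiserver process appears or the restart logic gives up. Reduced to shell, the observable polling behaviour amounts to the sketch below (an illustration of the loop, not minikube's actual Go implementation):

    # Wait for a kube-apiserver process that belongs to this minikube node.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 0.5    # matches the ~500 ms interval between checks in the log
    done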
	I0124 10:34:22.733917   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:22.734038   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:22.745096   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:23.233134   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:23.233249   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:23.243735   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:23.734239   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:23.734430   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:23.745488   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:24.235107   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:24.235337   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:24.246377   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:24.733047   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:24.733168   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:24.742688   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:25.234408   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:25.234511   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:25.245755   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:25.733312   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:25.733515   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:25.744574   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:26.233271   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:26.233453   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:26.244246   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:26.735181   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:26.735370   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:26.746446   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:27.233241   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:27.233373   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:27.243466   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:27.733170   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:27.733259   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:27.743220   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:28.233931   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:28.234143   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:28.245528   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:28.735230   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:28.735345   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:28.746443   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:29.233944   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:29.234063   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:29.245246   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:29.733331   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:29.733553   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:29.744527   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:30.233124   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:30.233303   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:30.243601   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:30.735281   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:30.735381   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:30.746571   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:31.233120   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:31.233303   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:31.243809   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:31.733320   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:31.733511   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:31.744569   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:32.235151   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:32.235373   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:32.246597   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:32.246608   27263 api_server.go:165] Checking apiserver status ...
	I0124 10:34:32.246666   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:34:32.255052   27263 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:34:32.255065   27263 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0124 10:34:32.255069   27263 kubeadm.go:1120] stopping kube-system containers ...
	I0124 10:34:32.255133   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0124 10:34:32.278144   27263 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0124 10:34:32.289034   27263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0124 10:34:32.296955   27263 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Jan 24 18:30 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Jan 24 18:30 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Jan 24 18:30 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Jan 24 18:30 /etc/kubernetes/scheduler.conf
	
	I0124 10:34:32.297013   27263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0124 10:34:32.304711   27263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0124 10:34:32.312383   27263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0124 10:34:32.319889   27263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0124 10:34:32.327498   27263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0124 10:34:32.334944   27263 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0124 10:34:32.334958   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:34:32.391992   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:34:32.938139   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:34:33.146304   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:34:33.232644   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
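	The sequence above shows how the existing cluster configuration is revalidated before the restart: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, the regenerated kubeadm.yaml is copied into place, and the individual kubeadm init phases are re-run. Below is a minimal Go sketch of that endpoint check, an editorial approximation only: it assumes local access to the same file paths, whereas the log runs the equivalent sudo grep over SSH via ssh_runner.

	// Sketch (not minikube source): checks whether each kubeconfig still points
	// at the expected control-plane endpoint; files that do not would need to be
	// regenerated by the kubeadm init phases shown above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				fmt.Printf("%s: unreadable (%v)\n", f, err)
				continue
			}
			if strings.Contains(string(data), endpoint) {
				fmt.Printf("%s: already points at %s\n", f, endpoint)
			} else {
				fmt.Printf("%s: needs regeneration\n", f)
			}
		}
	}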
	I0124 10:34:33.290118   27263 api_server.go:51] waiting for apiserver process to appear ...
	I0124 10:34:33.290187   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:33.799551   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:34.299344   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:34.801501   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:35.301437   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:35.799558   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:36.299648   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:36.799318   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:37.300448   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:37.800275   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:38.301134   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:38.800118   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:39.299612   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:39.800413   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:40.300840   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:40.801379   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:41.299622   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:41.801332   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:42.299761   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:42.800784   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:43.299694   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:43.799778   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:44.299830   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:44.801489   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:45.299496   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:45.799393   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:46.299649   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:46.799372   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:47.300203   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:47.799959   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:48.299891   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:48.801482   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:49.301448   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:49.800541   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:50.299997   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:50.799625   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:51.299528   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:51.799467   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:52.299456   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:52.799903   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:53.299532   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:53.799654   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:54.299551   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:54.801515   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:55.300022   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:55.800398   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:56.301274   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:56.801522   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:57.299449   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:57.799938   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:58.300819   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:58.801517   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:59.300558   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:34:59.799408   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:00.300593   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:00.801680   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:01.299731   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:01.800547   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:02.299462   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:02.801549   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:03.300003   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:03.801015   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:04.301403   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:04.801694   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:05.300056   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:05.799629   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:06.299908   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:06.801670   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:07.300985   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:07.800192   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:08.299737   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:08.799994   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:09.301622   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:09.800068   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:10.300116   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:10.801575   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:11.301449   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:11.799641   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:12.299766   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:12.801146   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:13.299570   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:13.800124   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:14.301770   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:14.801651   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:15.300879   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:15.799615   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:16.299550   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:16.799523   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:17.299543   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:17.799634   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:18.299781   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:18.799606   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:19.299592   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:19.800637   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:20.300637   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:20.799531   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:21.299574   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:21.799728   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:22.300628   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:22.799623   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:23.299646   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:23.799999   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:24.300168   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:24.800568   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:25.299645   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:25.800859   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:26.300290   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:26.800818   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:27.299628   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:27.799613   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:28.300122   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:28.801578   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:29.299878   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:29.799780   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:30.299864   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:30.799962   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:31.300105   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:31.799848   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:32.299964   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:32.799950   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
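	The block above is the apiserver wait loop: the same pgrep command is retried roughly every 500 ms until a kube-apiserver process appears or the overall timeout expires. A minimal Go sketch of that polling pattern follows; it assumes the command is run locally, whereas the log runs it over SSH inside the node container via ssh_runner.

	// Sketch (not minikube source): poll for a kube-apiserver process at the
	// same cadence as the log entries above, giving up after a deadline.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServerPID(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Same command as in the log: pgrep -xnf "kube-apiserver.*minikube.*"
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil && len(out) > 0 {
				return string(out), nil // PID found: the apiserver process is up
			}
			time.Sleep(500 * time.Millisecond)
		}
		return "", fmt.Errorf("timed out waiting for kube-apiserver process")
	}

	func main() {
		pid, err := waitForAPIServerPID(90 * time.Second)
		if err != nil {
			fmt.Println("apiserver never appeared:", err)
			return
		}
		fmt.Println("apiserver PID:", pid)
	}

	In this run the loop never finds a PID, which is why execution falls through to the container and journal log gathering that follows.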
	I0124 10:35:33.299767   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:35:33.324481   27263 logs.go:279] 0 containers: []
	W0124 10:35:33.324497   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:35:33.324567   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:35:33.350270   27263 logs.go:279] 0 containers: []
	W0124 10:35:33.350284   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:35:33.350364   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:35:33.378224   27263 logs.go:279] 0 containers: []
	W0124 10:35:33.378237   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:35:33.378318   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:35:33.406604   27263 logs.go:279] 0 containers: []
	W0124 10:35:33.406619   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:35:33.406730   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:35:33.449790   27263 logs.go:279] 0 containers: []
	W0124 10:35:33.449804   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:35:33.449879   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:35:33.476617   27263 logs.go:279] 0 containers: []
	W0124 10:35:33.476630   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:35:33.476702   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:35:33.500330   27263 logs.go:279] 0 containers: []
	W0124 10:35:33.500344   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:35:33.500420   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:35:33.525030   27263 logs.go:279] 0 containers: []
	W0124 10:35:33.525043   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:35:33.525056   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:35:33.525063   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:35:33.564179   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:35:33.564193   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:35:33.576674   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:35:33.576687   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:35:33.632725   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:35:33.632739   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:35:33.632746   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:35:33.649030   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:35:33.649043   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:35:35.696963   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047895052s)
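	Each log-gathering pass above locates control-plane containers by name with docker ps -a --filter=name=k8s_<component> --format={{.ID}} and reports an empty result as "No container was found matching" that component. A minimal Go sketch of that per-component lookup, assuming docker is available on the local PATH (the log issues the same command over SSH):

	// Sketch (not minikube source): list container IDs for each expected
	// control-plane component; an empty list mirrors the warnings in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerIDs(component string) ([]string, error) {
		// Equivalent to: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // one ID per line; empty output means none
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kubernetes-dashboard", "storage-provisioner",
			"kube-controller-manager",
		}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("lookup failed for %q: %v\n", c, err)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
			}
		}
	}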
	I0124 10:35:38.197684   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:38.301797   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:35:38.326847   27263 logs.go:279] 0 containers: []
	W0124 10:35:38.326861   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:35:38.326931   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:35:38.353686   27263 logs.go:279] 0 containers: []
	W0124 10:35:38.353700   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:35:38.353768   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:35:38.380658   27263 logs.go:279] 0 containers: []
	W0124 10:35:38.380680   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:35:38.380791   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:35:38.405985   27263 logs.go:279] 0 containers: []
	W0124 10:35:38.406007   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:35:38.406080   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:35:38.429871   27263 logs.go:279] 0 containers: []
	W0124 10:35:38.429903   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:35:38.430041   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:35:38.453575   27263 logs.go:279] 0 containers: []
	W0124 10:35:38.453589   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:35:38.453672   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:35:38.477212   27263 logs.go:279] 0 containers: []
	W0124 10:35:38.477225   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:35:38.477299   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:35:38.501091   27263 logs.go:279] 0 containers: []
	W0124 10:35:38.501106   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:35:38.501113   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:35:38.501120   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:35:40.550823   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049678246s)
	I0124 10:35:40.550932   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:35:40.550939   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:35:40.590557   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:35:40.590576   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:35:40.602985   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:35:40.603002   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:35:40.659890   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:35:40.659908   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:35:40.659914   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:35:43.175759   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:43.301890   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:35:43.330569   27263 logs.go:279] 0 containers: []
	W0124 10:35:43.330585   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:35:43.330686   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:35:43.374052   27263 logs.go:279] 0 containers: []
	W0124 10:35:43.374066   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:35:43.374141   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:35:43.398378   27263 logs.go:279] 0 containers: []
	W0124 10:35:43.398393   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:35:43.398470   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:35:43.422112   27263 logs.go:279] 0 containers: []
	W0124 10:35:43.422126   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:35:43.422204   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:35:43.452639   27263 logs.go:279] 0 containers: []
	W0124 10:35:43.452662   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:35:43.452773   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:35:43.480636   27263 logs.go:279] 0 containers: []
	W0124 10:35:43.480655   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:35:43.480728   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:35:43.505092   27263 logs.go:279] 0 containers: []
	W0124 10:35:43.505106   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:35:43.505183   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:35:43.530712   27263 logs.go:279] 0 containers: []
	W0124 10:35:43.530730   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:35:43.530738   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:35:43.530750   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:35:43.581330   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:35:43.581353   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:35:43.594568   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:35:43.594583   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:35:43.666121   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:35:43.666141   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:35:43.666153   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:35:43.685415   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:35:43.685434   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:35:45.738365   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052905779s)
	I0124 10:35:48.238790   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:48.299875   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:35:48.339701   27263 logs.go:279] 0 containers: []
	W0124 10:35:48.339726   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:35:48.339843   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:35:48.376769   27263 logs.go:279] 0 containers: []
	W0124 10:35:48.376794   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:35:48.376912   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:35:48.412518   27263 logs.go:279] 0 containers: []
	W0124 10:35:48.412532   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:35:48.412661   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:35:48.447725   27263 logs.go:279] 0 containers: []
	W0124 10:35:48.447746   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:35:48.447865   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:35:48.497932   27263 logs.go:279] 0 containers: []
	W0124 10:35:48.497950   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:35:48.498063   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:35:48.540021   27263 logs.go:279] 0 containers: []
	W0124 10:35:48.540039   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:35:48.540131   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:35:48.573083   27263 logs.go:279] 0 containers: []
	W0124 10:35:48.573102   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:35:48.573208   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:35:48.609865   27263 logs.go:279] 0 containers: []
	W0124 10:35:48.609903   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:35:48.609921   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:35:48.609940   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:35:48.685802   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:35:48.685822   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:35:48.710129   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:35:48.710146   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:35:48.783677   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:35:48.783697   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:35:48.783706   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:35:48.803191   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:35:48.803207   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:35:50.856921   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053684027s)
	I0124 10:35:53.357292   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:53.800838   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:35:53.826236   27263 logs.go:279] 0 containers: []
	W0124 10:35:53.826251   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:35:53.826322   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:35:53.850052   27263 logs.go:279] 0 containers: []
	W0124 10:35:53.850066   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:35:53.850135   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:35:53.873396   27263 logs.go:279] 0 containers: []
	W0124 10:35:53.873410   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:35:53.873481   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:35:53.897175   27263 logs.go:279] 0 containers: []
	W0124 10:35:53.897210   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:35:53.897314   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:35:53.921086   27263 logs.go:279] 0 containers: []
	W0124 10:35:53.921099   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:35:53.921173   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:35:53.946318   27263 logs.go:279] 0 containers: []
	W0124 10:35:53.946330   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:35:53.946390   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:35:53.969657   27263 logs.go:279] 0 containers: []
	W0124 10:35:53.969670   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:35:53.969757   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:35:53.994798   27263 logs.go:279] 0 containers: []
	W0124 10:35:53.994812   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:35:53.994824   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:35:53.994834   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:35:54.034641   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:35:54.034655   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:35:54.047102   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:35:54.047119   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:35:54.104295   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:35:54.104309   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:35:54.104316   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:35:54.120122   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:35:54.120137   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:35:56.168876   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048713719s)
	I0124 10:35:58.671234   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:35:58.800702   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:35:58.827458   27263 logs.go:279] 0 containers: []
	W0124 10:35:58.827472   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:35:58.827546   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:35:58.858956   27263 logs.go:279] 0 containers: []
	W0124 10:35:58.858973   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:35:58.859059   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:35:58.913822   27263 logs.go:279] 0 containers: []
	W0124 10:35:58.913838   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:35:58.913913   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:35:58.971095   27263 logs.go:279] 0 containers: []
	W0124 10:35:58.971113   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:35:58.971196   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:35:59.015429   27263 logs.go:279] 0 containers: []
	W0124 10:35:59.015452   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:35:59.015557   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:35:59.051018   27263 logs.go:279] 0 containers: []
	W0124 10:35:59.051037   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:35:59.051132   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:35:59.092421   27263 logs.go:279] 0 containers: []
	W0124 10:35:59.092438   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:35:59.092527   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:35:59.127424   27263 logs.go:279] 0 containers: []
	W0124 10:35:59.127440   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:35:59.127450   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:35:59.127470   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:35:59.148797   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:35:59.148822   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:35:59.234684   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:35:59.234698   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:35:59.234708   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:35:59.266041   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:35:59.266069   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:36:01.325051   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058956501s)
	I0124 10:36:01.325165   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:36:01.325174   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:36:03.866123   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:36:04.299835   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:36:04.326584   27263 logs.go:279] 0 containers: []
	W0124 10:36:04.326596   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:36:04.326671   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:36:04.355260   27263 logs.go:279] 0 containers: []
	W0124 10:36:04.355289   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:36:04.355415   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:36:04.380866   27263 logs.go:279] 0 containers: []
	W0124 10:36:04.380884   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:36:04.380945   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:36:04.408169   27263 logs.go:279] 0 containers: []
	W0124 10:36:04.408181   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:36:04.408279   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:36:04.435357   27263 logs.go:279] 0 containers: []
	W0124 10:36:04.435370   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:36:04.435443   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:36:04.462482   27263 logs.go:279] 0 containers: []
	W0124 10:36:04.462496   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:36:04.462557   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:36:04.490361   27263 logs.go:279] 0 containers: []
	W0124 10:36:04.490376   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:36:04.490433   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:36:04.518236   27263 logs.go:279] 0 containers: []
	W0124 10:36:04.518248   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:36:04.518255   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:36:04.518262   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:36:04.559749   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:36:04.559769   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:36:04.573583   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:36:04.573601   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:36:04.634230   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:36:04.634260   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:36:04.634269   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:36:04.650334   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:36:04.650347   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:36:06.700448   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050075382s)
	I0124 10:36:09.201217   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:36:09.300181   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:36:09.325572   27263 logs.go:279] 0 containers: []
	W0124 10:36:09.325606   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:36:09.325727   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:36:09.350957   27263 logs.go:279] 0 containers: []
	W0124 10:36:09.350971   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:36:09.351043   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:36:09.376159   27263 logs.go:279] 0 containers: []
	W0124 10:36:09.376173   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:36:09.376248   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:36:09.402248   27263 logs.go:279] 0 containers: []
	W0124 10:36:09.402265   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:36:09.402339   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:36:09.450267   27263 logs.go:279] 0 containers: []
	W0124 10:36:09.450280   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:36:09.450358   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:36:09.475618   27263 logs.go:279] 0 containers: []
	W0124 10:36:09.475638   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:36:09.475711   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:36:09.500216   27263 logs.go:279] 0 containers: []
	W0124 10:36:09.500230   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:36:09.500306   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:36:09.525268   27263 logs.go:279] 0 containers: []
	W0124 10:36:09.525281   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:36:09.525309   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:36:09.525316   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:36:09.537819   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:36:09.537839   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:36:09.596421   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:36:09.596444   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:36:09.596453   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:36:09.615506   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:36:09.615522   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:36:11.667765   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052217032s)
	I0124 10:36:11.667897   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:36:11.667907   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:36:14.213971   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:36:14.299914   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:36:14.328782   27263 logs.go:279] 0 containers: []
	W0124 10:36:14.328798   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:36:14.328920   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:36:14.359270   27263 logs.go:279] 0 containers: []
	W0124 10:36:14.359286   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:36:14.359381   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:36:14.390031   27263 logs.go:279] 0 containers: []
	W0124 10:36:14.390048   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:36:14.390135   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:36:14.416625   27263 logs.go:279] 0 containers: []
	W0124 10:36:14.416641   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:36:14.416718   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:36:14.445229   27263 logs.go:279] 0 containers: []
	W0124 10:36:14.445245   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:36:14.445330   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:36:14.477040   27263 logs.go:279] 0 containers: []
	W0124 10:36:14.477053   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:36:14.477137   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:36:14.504565   27263 logs.go:279] 0 containers: []
	W0124 10:36:14.504581   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:36:14.504662   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:36:14.534452   27263 logs.go:279] 0 containers: []
	W0124 10:36:14.534469   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:36:14.534477   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:36:14.534485   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:36:14.554259   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:36:14.554284   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:36:16.618597   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.064282615s)
	I0124 10:36:16.618740   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:36:16.618754   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:36:16.672190   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:36:16.672211   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:36:16.689634   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:36:16.689660   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:36:16.764207   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:36:19.264403   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:36:19.300342   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:36:19.332501   27263 logs.go:279] 0 containers: []
	W0124 10:36:19.332522   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:36:19.332627   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:36:19.365337   27263 logs.go:279] 0 containers: []
	W0124 10:36:19.365354   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:36:19.365470   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:36:19.398205   27263 logs.go:279] 0 containers: []
	W0124 10:36:19.398227   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:36:19.398323   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:36:19.431749   27263 logs.go:279] 0 containers: []
	W0124 10:36:19.431765   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:36:19.431845   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:36:19.464795   27263 logs.go:279] 0 containers: []
	W0124 10:36:19.464816   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:36:19.464920   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:36:19.494378   27263 logs.go:279] 0 containers: []
	W0124 10:36:19.494404   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:36:19.494501   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:36:19.529265   27263 logs.go:279] 0 containers: []
	W0124 10:36:19.529280   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:36:19.529365   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:36:19.557896   27263 logs.go:279] 0 containers: []
	W0124 10:36:19.557910   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:36:19.557917   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:36:19.557924   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:36:21.627592   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.069638568s)
	I0124 10:36:21.627788   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:36:21.627805   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:36:21.680467   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:36:21.680488   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:36:21.697056   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:36:21.697074   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:36:21.765559   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:36:21.765571   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:36:21.765585   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:36:24.285447   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:36:24.300160   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:36:24.324472   27263 logs.go:279] 0 containers: []
	W0124 10:36:24.324488   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:36:24.324564   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:36:24.351611   27263 logs.go:279] 0 containers: []
	W0124 10:36:24.351623   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:36:24.351736   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:36:24.377209   27263 logs.go:279] 0 containers: []
	W0124 10:36:24.377224   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:36:24.377295   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:36:24.403808   27263 logs.go:279] 0 containers: []
	W0124 10:36:24.403821   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:36:24.403905   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:36:24.430771   27263 logs.go:279] 0 containers: []
	W0124 10:36:24.430785   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:36:24.430853   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:36:24.457283   27263 logs.go:279] 0 containers: []
	W0124 10:36:24.457294   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:36:24.457351   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:36:24.482135   27263 logs.go:279] 0 containers: []
	W0124 10:36:24.482149   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:36:24.482220   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:36:24.505624   27263 logs.go:279] 0 containers: []
	W0124 10:36:24.505638   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:36:24.505644   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:36:24.505653   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:36:24.547529   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:36:24.547545   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:36:24.560710   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:36:24.560724   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:36:24.626008   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:36:24.626040   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:36:24.626057   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:36:24.646262   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:36:24.646284   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:36:26.704168   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057857808s)
	I0124 10:36:29.204480   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:36:29.300767   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:36:29.331891   27263 logs.go:279] 0 containers: []
	W0124 10:36:29.331916   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:36:29.332010   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:36:29.365619   27263 logs.go:279] 0 containers: []
	W0124 10:36:29.365632   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:36:29.365706   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:36:29.396891   27263 logs.go:279] 0 containers: []
	W0124 10:36:29.396905   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:36:29.396990   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:36:29.426807   27263 logs.go:279] 0 containers: []
	W0124 10:36:29.426821   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:36:29.426896   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:36:29.455189   27263 logs.go:279] 0 containers: []
	W0124 10:36:29.455206   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:36:29.455290   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:36:29.486703   27263 logs.go:279] 0 containers: []
	W0124 10:36:29.486718   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:36:29.486811   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:36:29.519630   27263 logs.go:279] 0 containers: []
	W0124 10:36:29.519664   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:36:29.519754   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:36:29.550992   27263 logs.go:279] 0 containers: []
	W0124 10:36:29.551008   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:36:29.551015   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:36:29.551022   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:36:29.600825   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:36:29.600850   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:36:29.617686   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:36:29.617706   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:36:29.689554   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:36:29.689569   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:36:29.689578   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:36:29.708453   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:36:29.708470   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:36:31.773818   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065318481s)
	I0124 10:36:34.274201   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:36:34.300102   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:36:34.327545   27263 logs.go:279] 0 containers: []
	W0124 10:36:34.327560   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:36:34.327638   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:36:34.352069   27263 logs.go:279] 0 containers: []
	W0124 10:36:34.352088   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:36:34.352179   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:36:34.387680   27263 logs.go:279] 0 containers: []
	W0124 10:36:34.387698   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:36:34.387797   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:36:34.432597   27263 logs.go:279] 0 containers: []
	W0124 10:36:34.432612   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:36:34.432686   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:36:34.460664   27263 logs.go:279] 0 containers: []
	W0124 10:36:34.460678   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:36:34.460758   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:36:34.485636   27263 logs.go:279] 0 containers: []
	W0124 10:36:34.485649   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:36:34.485765   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:36:34.511484   27263 logs.go:279] 0 containers: []
	W0124 10:36:34.511498   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:36:34.511573   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:36:34.536819   27263 logs.go:279] 0 containers: []
	W0124 10:36:34.536834   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:36:34.536841   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:36:34.536849   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:36:34.553635   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:36:34.553650   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:36:36.606571   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052894374s)
	I0124 10:36:36.606682   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:36:36.606691   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:36:36.647744   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:36:36.647761   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:36:36.660507   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:36:36.660522   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:36:36.720829   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:36:39.221073   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:36:39.300150   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:36:39.335966   27263 logs.go:279] 0 containers: []
	W0124 10:36:39.335992   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:36:39.336086   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:36:39.375286   27263 logs.go:279] 0 containers: []
	W0124 10:36:39.375322   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:36:39.375439   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:36:39.406295   27263 logs.go:279] 0 containers: []
	W0124 10:36:39.406309   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:36:39.406378   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:36:39.437783   27263 logs.go:279] 0 containers: []
	W0124 10:36:39.437798   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:36:39.437882   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:36:39.469418   27263 logs.go:279] 0 containers: []
	W0124 10:36:39.469427   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:36:39.469485   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:36:39.499848   27263 logs.go:279] 0 containers: []
	W0124 10:36:39.499861   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:36:39.499933   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:36:39.530063   27263 logs.go:279] 0 containers: []
	W0124 10:36:39.530087   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:36:39.530227   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:36:39.560873   27263 logs.go:279] 0 containers: []
	W0124 10:36:39.560892   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:36:39.560922   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:36:39.560931   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:36:39.613831   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:36:39.613871   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:36:39.628351   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:36:39.628367   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:36:39.700862   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:36:39.700873   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:36:39.700882   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:36:39.723575   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:36:39.723591   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:36:41.778173   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054556757s)
	I0124 10:36:44.280293   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:36:44.300324   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:36:44.323067   27263 logs.go:279] 0 containers: []
	W0124 10:36:44.323081   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:36:44.323150   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:36:44.346630   27263 logs.go:279] 0 containers: []
	W0124 10:36:44.346659   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:36:44.346747   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:36:44.376680   27263 logs.go:279] 0 containers: []
	W0124 10:36:44.376729   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:36:44.376856   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:36:44.404659   27263 logs.go:279] 0 containers: []
	W0124 10:36:44.404676   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:36:44.404795   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:36:44.437762   27263 logs.go:279] 0 containers: []
	W0124 10:36:44.437783   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:36:44.437910   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:36:44.464532   27263 logs.go:279] 0 containers: []
	W0124 10:36:44.464548   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:36:44.464616   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:36:44.489080   27263 logs.go:279] 0 containers: []
	W0124 10:36:44.489098   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:36:44.489180   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:36:44.511913   27263 logs.go:279] 0 containers: []
	W0124 10:36:44.511927   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:36:44.511934   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:36:44.511941   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:36:44.555041   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:36:44.555061   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:36:44.568603   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:36:44.568619   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:36:44.633269   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:36:44.633281   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:36:44.633290   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:36:44.651517   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:36:44.651533   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:36:46.703975   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052416099s)
	I0124 10:36:49.204309   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:36:49.300151   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:36:49.324445   27263 logs.go:279] 0 containers: []
	W0124 10:36:49.324458   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:36:49.324528   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:36:49.347649   27263 logs.go:279] 0 containers: []
	W0124 10:36:49.347663   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:36:49.347736   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:36:49.372063   27263 logs.go:279] 0 containers: []
	W0124 10:36:49.372077   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:36:49.372145   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:36:49.395823   27263 logs.go:279] 0 containers: []
	W0124 10:36:49.395837   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:36:49.395924   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:36:49.419128   27263 logs.go:279] 0 containers: []
	W0124 10:36:49.419142   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:36:49.419250   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:36:49.443449   27263 logs.go:279] 0 containers: []
	W0124 10:36:49.443464   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:36:49.443534   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:36:49.467642   27263 logs.go:279] 0 containers: []
	W0124 10:36:49.467656   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:36:49.467727   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:36:49.493675   27263 logs.go:279] 0 containers: []
	W0124 10:36:49.493689   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:36:49.493696   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:36:49.493703   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:36:49.551092   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:36:49.551105   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:36:49.551112   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:36:49.567709   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:36:49.567724   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:36:51.619618   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051869063s)
	I0124 10:36:51.619729   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:36:51.619735   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:36:51.657996   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:36:51.658016   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:36:54.172183   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:36:54.301926   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:36:54.326518   27263 logs.go:279] 0 containers: []
	W0124 10:36:54.326531   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:36:54.326599   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:36:54.357208   27263 logs.go:279] 0 containers: []
	W0124 10:36:54.357222   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:36:54.357302   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:36:54.391772   27263 logs.go:279] 0 containers: []
	W0124 10:36:54.391787   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:36:54.391862   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:36:54.414680   27263 logs.go:279] 0 containers: []
	W0124 10:36:54.414694   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:36:54.414794   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:36:54.440056   27263 logs.go:279] 0 containers: []
	W0124 10:36:54.440072   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:36:54.440157   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:36:54.470359   27263 logs.go:279] 0 containers: []
	W0124 10:36:54.470374   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:36:54.470450   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:36:54.495278   27263 logs.go:279] 0 containers: []
	W0124 10:36:54.495290   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:36:54.495363   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:36:54.520129   27263 logs.go:279] 0 containers: []
	W0124 10:36:54.520144   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:36:54.520151   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:36:54.520158   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:36:54.563108   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:36:54.563128   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:36:54.577933   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:36:54.577955   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:36:54.637249   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:36:54.637264   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:36:54.637273   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:36:54.659531   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:36:54.659568   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:36:56.714135   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0545396s)
	I0124 10:36:59.216476   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:36:59.300356   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:36:59.327173   27263 logs.go:279] 0 containers: []
	W0124 10:36:59.327188   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:36:59.327257   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:36:59.366819   27263 logs.go:279] 0 containers: []
	W0124 10:36:59.366840   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:36:59.366958   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:36:59.404103   27263 logs.go:279] 0 containers: []
	W0124 10:36:59.404119   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:36:59.404221   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:36:59.432679   27263 logs.go:279] 0 containers: []
	W0124 10:36:59.432713   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:36:59.432809   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:36:59.458711   27263 logs.go:279] 0 containers: []
	W0124 10:36:59.458726   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:36:59.458796   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:36:59.482079   27263 logs.go:279] 0 containers: []
	W0124 10:36:59.482095   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:36:59.482199   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:36:59.506072   27263 logs.go:279] 0 containers: []
	W0124 10:36:59.506089   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:36:59.506166   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:36:59.529938   27263 logs.go:279] 0 containers: []
	W0124 10:36:59.529954   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:36:59.529962   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:36:59.529970   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:36:59.590558   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:36:59.590571   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:36:59.590578   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:36:59.607909   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:36:59.607923   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:37:01.658453   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050504126s)
	I0124 10:37:01.658567   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:37:01.658575   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:37:01.699500   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:37:01.699517   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:37:04.213282   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:37:04.303332   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:37:04.328482   27263 logs.go:279] 0 containers: []
	W0124 10:37:04.328497   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:37:04.328565   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:37:04.351964   27263 logs.go:279] 0 containers: []
	W0124 10:37:04.351980   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:37:04.352053   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:37:04.377489   27263 logs.go:279] 0 containers: []
	W0124 10:37:04.377501   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:37:04.377559   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:37:04.401055   27263 logs.go:279] 0 containers: []
	W0124 10:37:04.401072   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:37:04.401144   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:37:04.424295   27263 logs.go:279] 0 containers: []
	W0124 10:37:04.424312   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:37:04.424395   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:37:04.447681   27263 logs.go:279] 0 containers: []
	W0124 10:37:04.447696   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:37:04.447765   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:37:04.473454   27263 logs.go:279] 0 containers: []
	W0124 10:37:04.473468   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:37:04.473545   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:37:04.498046   27263 logs.go:279] 0 containers: []
	W0124 10:37:04.498060   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:37:04.498068   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:37:04.498075   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:37:04.539906   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:37:04.539922   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:37:04.553162   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:37:04.553177   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:37:04.608148   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:37:04.608161   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:37:04.608170   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:37:04.625189   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:37:04.625205   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:37:06.681589   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052638431s)
	I0124 10:37:09.185788   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:37:09.310400   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:37:09.336202   27263 logs.go:279] 0 containers: []
	W0124 10:37:09.336216   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:37:09.336303   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:37:09.358566   27263 logs.go:279] 0 containers: []
	W0124 10:37:09.358580   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:37:09.358653   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:37:09.381772   27263 logs.go:279] 0 containers: []
	W0124 10:37:09.381786   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:37:09.381855   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:37:09.406429   27263 logs.go:279] 0 containers: []
	W0124 10:37:09.406442   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:37:09.406509   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:37:09.429490   27263 logs.go:279] 0 containers: []
	W0124 10:37:09.429504   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:37:09.429571   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:37:09.453508   27263 logs.go:279] 0 containers: []
	W0124 10:37:09.453522   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:37:09.453592   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:37:09.478396   27263 logs.go:279] 0 containers: []
	W0124 10:37:09.478409   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:37:09.478481   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:37:09.503662   27263 logs.go:279] 0 containers: []
	W0124 10:37:09.503676   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:37:09.503684   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:37:09.503691   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:37:09.520753   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:37:09.520768   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:37:11.588040   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.064514698s)
	I0124 10:37:11.588224   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:37:11.588236   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:37:11.639976   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:37:11.639991   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:37:11.656590   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:37:11.656607   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:37:11.735482   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:37:14.240124   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:37:14.318006   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:37:14.343398   27263 logs.go:279] 0 containers: []
	W0124 10:37:14.343413   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:37:14.343480   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:37:14.368268   27263 logs.go:279] 0 containers: []
	W0124 10:37:14.368283   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:37:14.368354   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:37:14.394687   27263 logs.go:279] 0 containers: []
	W0124 10:37:14.394701   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:37:14.394781   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:37:14.445644   27263 logs.go:279] 0 containers: []
	W0124 10:37:14.445666   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:37:14.445776   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:37:14.470198   27263 logs.go:279] 0 containers: []
	W0124 10:37:14.470211   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:37:14.470280   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:37:14.499475   27263 logs.go:279] 0 containers: []
	W0124 10:37:14.499491   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:37:14.499579   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:37:14.525749   27263 logs.go:279] 0 containers: []
	W0124 10:37:14.525762   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:37:14.525829   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:37:14.548898   27263 logs.go:279] 0 containers: []
	W0124 10:37:14.548914   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:37:14.548921   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:37:14.548928   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:37:14.565316   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:37:14.565330   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:37:16.620917   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053595928s)
	I0124 10:37:16.621042   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:37:16.621053   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:37:16.660886   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:37:16.660904   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:37:16.674216   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:37:16.674239   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:37:16.731603   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:37:19.234332   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:37:19.320685   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:37:19.346109   27263 logs.go:279] 0 containers: []
	W0124 10:37:19.346123   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:37:19.346199   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:37:19.368863   27263 logs.go:279] 0 containers: []
	W0124 10:37:19.368876   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:37:19.368948   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:37:19.391982   27263 logs.go:279] 0 containers: []
	W0124 10:37:19.391995   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:37:19.392061   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:37:19.416373   27263 logs.go:279] 0 containers: []
	W0124 10:37:19.416388   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:37:19.416458   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:37:19.441161   27263 logs.go:279] 0 containers: []
	W0124 10:37:19.441176   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:37:19.441266   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:37:19.464668   27263 logs.go:279] 0 containers: []
	W0124 10:37:19.464682   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:37:19.464759   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:37:19.489210   27263 logs.go:279] 0 containers: []
	W0124 10:37:19.489224   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:37:19.489304   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:37:19.512374   27263 logs.go:279] 0 containers: []
	W0124 10:37:19.512391   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:37:19.512399   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:37:19.512405   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:37:21.560184   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046332372s)
	I0124 10:37:21.560291   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:37:21.560298   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:37:21.599420   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:37:21.599437   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:37:21.611904   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:37:21.611921   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:37:21.669140   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:37:21.669151   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:37:21.669158   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:37:24.187327   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:37:24.324222   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:37:24.349847   27263 logs.go:279] 0 containers: []
	W0124 10:37:24.349861   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:37:24.349930   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:37:24.373554   27263 logs.go:279] 0 containers: []
	W0124 10:37:24.373567   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:37:24.373635   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:37:24.397189   27263 logs.go:279] 0 containers: []
	W0124 10:37:24.397203   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:37:24.397272   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:37:24.420916   27263 logs.go:279] 0 containers: []
	W0124 10:37:24.420930   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:37:24.420996   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:37:24.444221   27263 logs.go:279] 0 containers: []
	W0124 10:37:24.444236   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:37:24.444309   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:37:24.469002   27263 logs.go:279] 0 containers: []
	W0124 10:37:24.469036   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:37:24.469106   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:37:24.493375   27263 logs.go:279] 0 containers: []
	W0124 10:37:24.493389   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:37:24.493459   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:37:24.518043   27263 logs.go:279] 0 containers: []
	W0124 10:37:24.518072   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:37:24.518080   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:37:24.518086   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:37:24.556952   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:37:24.556968   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:37:24.569054   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:37:24.569071   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:37:24.625163   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:37:24.625176   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:37:24.625184   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:37:24.641189   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:37:24.641202   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:37:26.692154   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049882506s)
	I0124 10:37:29.193536   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:37:29.326033   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:37:29.350014   27263 logs.go:279] 0 containers: []
	W0124 10:37:29.350029   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:37:29.350098   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:37:29.373626   27263 logs.go:279] 0 containers: []
	W0124 10:37:29.373640   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:37:29.373728   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:37:29.399529   27263 logs.go:279] 0 containers: []
	W0124 10:37:29.399542   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:37:29.399610   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:37:29.424606   27263 logs.go:279] 0 containers: []
	W0124 10:37:29.424621   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:37:29.424699   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:37:29.473334   27263 logs.go:279] 0 containers: []
	W0124 10:37:29.473348   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:37:29.473416   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:37:29.497960   27263 logs.go:279] 0 containers: []
	W0124 10:37:29.497974   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:37:29.498042   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:37:29.522397   27263 logs.go:279] 0 containers: []
	W0124 10:37:29.522426   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:37:29.522495   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:37:29.545421   27263 logs.go:279] 0 containers: []
	W0124 10:37:29.545434   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:37:29.545441   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:37:29.545449   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:37:29.585331   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:37:29.585345   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:37:29.598254   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:37:29.598269   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:37:29.656708   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:37:29.656721   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:37:29.656728   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:37:29.673210   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:37:29.673224   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:37:31.724597   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050607141s)
	I0124 10:37:34.226370   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:37:34.327606   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:37:34.353693   27263 logs.go:279] 0 containers: []
	W0124 10:37:34.353707   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:37:34.353777   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:37:34.381159   27263 logs.go:279] 0 containers: []
	W0124 10:37:34.381172   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:37:34.381241   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:37:34.407759   27263 logs.go:279] 0 containers: []
	W0124 10:37:34.407773   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:37:34.407859   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:37:34.436700   27263 logs.go:279] 0 containers: []
	W0124 10:37:34.436732   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:37:34.436874   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:37:34.462996   27263 logs.go:279] 0 containers: []
	W0124 10:37:34.463012   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:37:34.463089   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:37:34.487784   27263 logs.go:279] 0 containers: []
	W0124 10:37:34.487799   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:37:34.487873   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:37:34.515995   27263 logs.go:279] 0 containers: []
	W0124 10:37:34.516013   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:37:34.516100   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:37:34.547184   27263 logs.go:279] 0 containers: []
	W0124 10:37:34.547199   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:37:34.547207   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:37:34.547220   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:37:34.562220   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:37:34.562238   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:37:34.632226   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:37:34.632238   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:37:34.632245   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:37:34.650368   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:37:34.650402   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:37:36.712024   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06105529s)
	I0124 10:37:36.712175   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:37:36.712186   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:37:39.262295   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:37:39.329578   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:37:39.355848   27263 logs.go:279] 0 containers: []
	W0124 10:37:39.355861   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:37:39.355928   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:37:39.379616   27263 logs.go:279] 0 containers: []
	W0124 10:37:39.379628   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:37:39.379694   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:37:39.403278   27263 logs.go:279] 0 containers: []
	W0124 10:37:39.403291   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:37:39.403358   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:37:39.427665   27263 logs.go:279] 0 containers: []
	W0124 10:37:39.427679   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:37:39.427758   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:37:39.451476   27263 logs.go:279] 0 containers: []
	W0124 10:37:39.451489   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:37:39.451560   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:37:39.474384   27263 logs.go:279] 0 containers: []
	W0124 10:37:39.474397   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:37:39.474463   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:37:39.499026   27263 logs.go:279] 0 containers: []
	W0124 10:37:39.499039   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:37:39.499108   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:37:39.522905   27263 logs.go:279] 0 containers: []
	W0124 10:37:39.522918   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:37:39.522925   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:37:39.522932   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:37:39.561396   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:37:39.561409   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:37:39.574554   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:37:39.574571   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:37:39.632005   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:37:39.632016   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:37:39.632022   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:37:39.648782   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:37:39.648796   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:37:41.701703   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052493702s)
	I0124 10:37:44.202396   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:37:44.331144   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:37:44.357057   27263 logs.go:279] 0 containers: []
	W0124 10:37:44.357071   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:37:44.357139   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:37:44.380881   27263 logs.go:279] 0 containers: []
	W0124 10:37:44.380896   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:37:44.380969   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:37:44.406888   27263 logs.go:279] 0 containers: []
	W0124 10:37:44.406902   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:37:44.406971   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:37:44.461959   27263 logs.go:279] 0 containers: []
	W0124 10:37:44.461974   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:37:44.462038   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:37:44.485167   27263 logs.go:279] 0 containers: []
	W0124 10:37:44.485182   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:37:44.485250   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:37:44.510226   27263 logs.go:279] 0 containers: []
	W0124 10:37:44.510239   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:37:44.510340   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:37:44.533891   27263 logs.go:279] 0 containers: []
	W0124 10:37:44.533906   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:37:44.533977   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:37:44.557400   27263 logs.go:279] 0 containers: []
	W0124 10:37:44.557415   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:37:44.557424   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:37:44.557431   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:37:44.596718   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:37:44.596732   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:37:44.609225   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:37:44.609242   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:37:44.665845   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:37:44.665856   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:37:44.665863   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:37:44.681616   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:37:44.681629   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:37:46.732928   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050989323s)
	I0124 10:37:49.234335   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:37:49.330644   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:37:49.354692   27263 logs.go:279] 0 containers: []
	W0124 10:37:49.354705   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:37:49.354775   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:37:49.378346   27263 logs.go:279] 0 containers: []
	W0124 10:37:49.378359   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:37:49.378425   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:37:49.405532   27263 logs.go:279] 0 containers: []
	W0124 10:37:49.405546   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:37:49.405620   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:37:49.431408   27263 logs.go:279] 0 containers: []
	W0124 10:37:49.431422   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:37:49.431488   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:37:49.459304   27263 logs.go:279] 0 containers: []
	W0124 10:37:49.459320   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:37:49.459374   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:37:49.489169   27263 logs.go:279] 0 containers: []
	W0124 10:37:49.489182   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:37:49.489274   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:37:49.519973   27263 logs.go:279] 0 containers: []
	W0124 10:37:49.519990   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:37:49.520080   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:37:49.550475   27263 logs.go:279] 0 containers: []
	W0124 10:37:49.550487   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:37:49.550494   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:37:49.550503   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:37:49.591869   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:37:49.591885   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:37:49.604324   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:37:49.604339   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:37:49.666052   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:37:49.666067   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:37:49.666076   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:37:49.685136   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:37:49.685170   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:37:51.739579   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054178782s)
	I0124 10:37:54.242100   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:37:54.331877   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:37:54.358124   27263 logs.go:279] 0 containers: []
	W0124 10:37:54.358138   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:37:54.358206   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:37:54.381880   27263 logs.go:279] 0 containers: []
	W0124 10:37:54.381893   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:37:54.381962   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:37:54.406255   27263 logs.go:279] 0 containers: []
	W0124 10:37:54.406270   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:37:54.406355   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:37:54.430379   27263 logs.go:279] 0 containers: []
	W0124 10:37:54.430392   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:37:54.430462   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:37:54.453277   27263 logs.go:279] 0 containers: []
	W0124 10:37:54.453291   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:37:54.453361   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:37:54.477506   27263 logs.go:279] 0 containers: []
	W0124 10:37:54.477519   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:37:54.477588   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:37:54.500280   27263 logs.go:279] 0 containers: []
	W0124 10:37:54.500295   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:37:54.500364   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:37:54.523649   27263 logs.go:279] 0 containers: []
	W0124 10:37:54.523664   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:37:54.523671   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:37:54.523677   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:37:54.564398   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:37:54.564411   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:37:54.576590   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:37:54.576603   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:37:54.632595   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:37:54.632635   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:37:54.632643   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:37:54.648988   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:37:54.649000   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:37:56.700531   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0513572s)
	I0124 10:37:59.203014   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:37:59.332298   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:37:59.356942   27263 logs.go:279] 0 containers: []
	W0124 10:37:59.356957   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:37:59.357031   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:37:59.380149   27263 logs.go:279] 0 containers: []
	W0124 10:37:59.380163   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:37:59.380241   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:37:59.404417   27263 logs.go:279] 0 containers: []
	W0124 10:37:59.404430   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:37:59.404510   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:37:59.429602   27263 logs.go:279] 0 containers: []
	W0124 10:37:59.429618   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:37:59.429688   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:37:59.476195   27263 logs.go:279] 0 containers: []
	W0124 10:37:59.476209   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:37:59.476278   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:37:59.501120   27263 logs.go:279] 0 containers: []
	W0124 10:37:59.501137   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:37:59.501231   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:37:59.525455   27263 logs.go:279] 0 containers: []
	W0124 10:37:59.525469   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:37:59.525537   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:37:59.550487   27263 logs.go:279] 0 containers: []
	W0124 10:37:59.550499   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:37:59.550506   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:37:59.550513   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:37:59.566119   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:37:59.566131   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:38:01.618086   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051821613s)
	I0124 10:38:01.618192   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:38:01.618199   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:38:01.656904   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:38:01.656921   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:38:01.670866   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:38:01.670881   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:38:01.729369   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:38:04.229839   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:04.332165   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:38:04.358102   27263 logs.go:279] 0 containers: []
	W0124 10:38:04.358115   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:38:04.358184   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:38:04.385371   27263 logs.go:279] 0 containers: []
	W0124 10:38:04.385387   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:38:04.385474   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:38:04.412963   27263 logs.go:279] 0 containers: []
	W0124 10:38:04.412981   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:38:04.413067   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:38:04.437616   27263 logs.go:279] 0 containers: []
	W0124 10:38:04.437628   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:38:04.437698   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:38:04.462071   27263 logs.go:279] 0 containers: []
	W0124 10:38:04.462086   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:38:04.462164   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:38:04.487734   27263 logs.go:279] 0 containers: []
	W0124 10:38:04.487764   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:38:04.487847   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:38:04.517217   27263 logs.go:279] 0 containers: []
	W0124 10:38:04.517236   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:38:04.517314   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:38:04.541979   27263 logs.go:279] 0 containers: []
	W0124 10:38:04.541992   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:38:04.542000   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:38:04.542006   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:38:04.558033   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:38:04.558047   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:38:06.614081   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055928178s)
	I0124 10:38:06.614203   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:38:06.614212   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:38:06.653837   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:38:06.653851   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:38:06.667066   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:38:06.667085   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:38:06.725130   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:38:09.225927   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:09.331764   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:38:09.356116   27263 logs.go:279] 0 containers: []
	W0124 10:38:09.356130   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:38:09.356199   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:38:09.380987   27263 logs.go:279] 0 containers: []
	W0124 10:38:09.381003   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:38:09.381084   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:38:09.406172   27263 logs.go:279] 0 containers: []
	W0124 10:38:09.406186   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:38:09.406261   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:38:09.432250   27263 logs.go:279] 0 containers: []
	W0124 10:38:09.432269   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:38:09.432397   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:38:09.457997   27263 logs.go:279] 0 containers: []
	W0124 10:38:09.458010   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:38:09.458079   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:38:09.484721   27263 logs.go:279] 0 containers: []
	W0124 10:38:09.484734   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:38:09.484810   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:38:09.513208   27263 logs.go:279] 0 containers: []
	W0124 10:38:09.513222   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:38:09.513303   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:38:09.539436   27263 logs.go:279] 0 containers: []
	W0124 10:38:09.539449   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:38:09.539457   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:38:09.539464   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:38:09.582963   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:38:09.582989   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:38:09.598615   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:38:09.598636   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:38:09.659498   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:38:09.659511   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:38:09.659518   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:38:09.680325   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:38:09.680343   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:38:11.732480   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052056553s)
	I0124 10:38:14.234398   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:14.333390   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:38:14.359210   27263 logs.go:279] 0 containers: []
	W0124 10:38:14.359225   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:38:14.359293   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:38:14.384535   27263 logs.go:279] 0 containers: []
	W0124 10:38:14.384550   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:38:14.384636   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:38:14.410296   27263 logs.go:279] 0 containers: []
	W0124 10:38:14.410310   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:38:14.410390   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:38:14.464844   27263 logs.go:279] 0 containers: []
	W0124 10:38:14.464860   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:38:14.464952   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:38:14.491148   27263 logs.go:279] 0 containers: []
	W0124 10:38:14.491166   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:38:14.491250   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:38:14.517076   27263 logs.go:279] 0 containers: []
	W0124 10:38:14.517090   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:38:14.517160   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:38:14.541340   27263 logs.go:279] 0 containers: []
	W0124 10:38:14.541353   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:38:14.541420   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:38:14.565448   27263 logs.go:279] 0 containers: []
	W0124 10:38:14.565462   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:38:14.565468   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:38:14.565476   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:38:14.606381   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:38:14.606399   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:38:14.619569   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:38:14.619620   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:38:14.677455   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:38:14.677468   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:38:14.677475   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:38:14.693797   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:38:14.693812   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:38:16.742142   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048264861s)
	I0124 10:38:19.243211   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:19.332259   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:38:19.357246   27263 logs.go:279] 0 containers: []
	W0124 10:38:19.357259   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:38:19.357327   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:38:19.381046   27263 logs.go:279] 0 containers: []
	W0124 10:38:19.381060   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:38:19.381131   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:38:19.404015   27263 logs.go:279] 0 containers: []
	W0124 10:38:19.404029   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:38:19.404097   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:38:19.427316   27263 logs.go:279] 0 containers: []
	W0124 10:38:19.427331   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:38:19.427402   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:38:19.450249   27263 logs.go:279] 0 containers: []
	W0124 10:38:19.450263   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:38:19.450347   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:38:19.473571   27263 logs.go:279] 0 containers: []
	W0124 10:38:19.473584   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:38:19.473657   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:38:19.497296   27263 logs.go:279] 0 containers: []
	W0124 10:38:19.497310   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:38:19.497387   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:38:19.520416   27263 logs.go:279] 0 containers: []
	W0124 10:38:19.520430   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:38:19.520437   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:38:19.520458   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:38:19.561553   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:38:19.561568   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:38:19.575044   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:38:19.575061   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:38:19.633197   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:38:19.633211   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:38:19.633217   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:38:19.651602   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:38:19.651618   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:38:21.700964   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049287915s)
	I0124 10:38:24.201310   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:24.333111   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:38:24.369141   27263 logs.go:279] 0 containers: []
	W0124 10:38:24.369164   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:38:24.369245   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:38:24.395205   27263 logs.go:279] 0 containers: []
	W0124 10:38:24.395220   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:38:24.395303   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:38:24.418623   27263 logs.go:279] 0 containers: []
	W0124 10:38:24.418638   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:38:24.418707   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:38:24.442955   27263 logs.go:279] 0 containers: []
	W0124 10:38:24.442971   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:38:24.443050   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:38:24.468455   27263 logs.go:279] 0 containers: []
	W0124 10:38:24.468468   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:38:24.468542   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:38:24.493839   27263 logs.go:279] 0 containers: []
	W0124 10:38:24.493854   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:38:24.493931   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:38:24.517513   27263 logs.go:279] 0 containers: []
	W0124 10:38:24.517527   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:38:24.517648   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:38:24.542201   27263 logs.go:279] 0 containers: []
	W0124 10:38:24.542214   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:38:24.542221   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:38:24.542228   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:38:24.585211   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:38:24.585227   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:38:24.597475   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:38:24.597488   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:38:24.653771   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:38:24.653809   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:38:24.653817   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:38:24.670336   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:38:24.670354   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:38:26.719860   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049459327s)
	I0124 10:38:29.222256   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:29.333338   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:38:29.358758   27263 logs.go:279] 0 containers: []
	W0124 10:38:29.358773   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:38:29.358841   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:38:29.383138   27263 logs.go:279] 0 containers: []
	W0124 10:38:29.383151   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:38:29.383219   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:38:29.409311   27263 logs.go:279] 0 containers: []
	W0124 10:38:29.409329   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:38:29.409405   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:38:29.434963   27263 logs.go:279] 0 containers: []
	W0124 10:38:29.434977   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:38:29.435046   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:38:29.478172   27263 logs.go:279] 0 containers: []
	W0124 10:38:29.478185   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:38:29.478253   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:38:29.503290   27263 logs.go:279] 0 containers: []
	W0124 10:38:29.503303   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:38:29.503374   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:38:29.527505   27263 logs.go:279] 0 containers: []
	W0124 10:38:29.527555   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:38:29.527627   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:38:29.551837   27263 logs.go:279] 0 containers: []
	W0124 10:38:29.551851   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:38:29.551858   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:38:29.551866   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:38:29.590937   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:38:29.590952   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:38:29.603043   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:38:29.603056   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:38:29.659792   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:38:29.659803   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:38:29.659814   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:38:29.675399   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:38:29.675413   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:38:31.723686   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048231613s)
	I0124 10:38:34.224839   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:34.235807   27263 kubeadm.go:637] restartCluster took 4m12.05588921s
	W0124 10:38:34.235897   27263 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0124 10:38:34.235912   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0124 10:38:34.649714   27263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 10:38:34.659537   27263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0124 10:38:34.667363   27263 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0124 10:38:34.667417   27263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0124 10:38:34.675161   27263 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0124 10:38:34.675191   27263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0124 10:38:34.725606   27263 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0124 10:38:34.726125   27263 kubeadm.go:322] [preflight] Running pre-flight checks
	I0124 10:38:35.037119   27263 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0124 10:38:35.037235   27263 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0124 10:38:35.037355   27263 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0124 10:38:35.270689   27263 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0124 10:38:35.271623   27263 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0124 10:38:35.278471   27263 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0124 10:38:35.354320   27263 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0124 10:38:35.375966   27263 out.go:204]   - Generating certificates and keys ...
	I0124 10:38:35.376042   27263 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0124 10:38:35.376122   27263 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0124 10:38:35.376188   27263 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0124 10:38:35.376268   27263 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0124 10:38:35.376354   27263 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0124 10:38:35.376409   27263 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0124 10:38:35.376466   27263 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0124 10:38:35.376541   27263 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0124 10:38:35.376592   27263 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0124 10:38:35.376639   27263 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0124 10:38:35.376665   27263 kubeadm.go:322] [certs] Using the existing "sa" key
	I0124 10:38:35.376701   27263 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0124 10:38:35.482769   27263 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0124 10:38:35.599001   27263 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0124 10:38:35.877612   27263 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0124 10:38:35.972431   27263 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0124 10:38:35.973146   27263 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0124 10:38:35.994635   27263 out.go:204]   - Booting up control plane ...
	I0124 10:38:35.994755   27263 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0124 10:38:35.994844   27263 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0124 10:38:35.994933   27263 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0124 10:38:35.995044   27263 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0124 10:38:35.995208   27263 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0124 10:39:15.982797   27263 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0124 10:39:15.983145   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:39:15.983299   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:39:20.984460   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:39:20.984655   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:39:30.986119   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:39:30.986278   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:39:50.987331   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:39:50.987478   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:40:30.989601   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:40:30.989865   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:40:30.989880   27263 kubeadm.go:322] 
	I0124 10:40:30.989918   27263 kubeadm.go:322] Unfortunately, an error has occurred:
	I0124 10:40:30.989980   27263 kubeadm.go:322] 	timed out waiting for the condition
	I0124 10:40:30.990003   27263 kubeadm.go:322] 
	I0124 10:40:30.990073   27263 kubeadm.go:322] This error is likely caused by:
	I0124 10:40:30.990115   27263 kubeadm.go:322] 	- The kubelet is not running
	I0124 10:40:30.990231   27263 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0124 10:40:30.990239   27263 kubeadm.go:322] 
	I0124 10:40:30.990386   27263 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0124 10:40:30.990452   27263 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0124 10:40:30.990534   27263 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0124 10:40:30.990563   27263 kubeadm.go:322] 
	I0124 10:40:30.990673   27263 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0124 10:40:30.990796   27263 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0124 10:40:30.990862   27263 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0124 10:40:30.990912   27263 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0124 10:40:30.990985   27263 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0124 10:40:30.991014   27263 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0124 10:40:30.993758   27263 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0124 10:40:30.993824   27263 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0124 10:40:30.993962   27263 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
	I0124 10:40:30.994055   27263 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0124 10:40:30.994153   27263 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0124 10:40:30.994240   27263 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0124 10:40:30.994390   27263 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0124 10:40:30.994417   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0124 10:40:31.409898   27263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 10:40:31.419728   27263 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0124 10:40:31.419783   27263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0124 10:40:31.427528   27263 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0124 10:40:31.427548   27263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0124 10:40:31.476171   27263 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0124 10:40:31.476224   27263 kubeadm.go:322] [preflight] Running pre-flight checks
	I0124 10:40:31.783986   27263 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0124 10:40:31.784084   27263 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0124 10:40:31.784177   27263 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0124 10:40:32.013779   27263 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0124 10:40:32.014538   27263 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0124 10:40:32.021443   27263 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0124 10:40:32.097023   27263 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0124 10:40:32.118655   27263 out.go:204]   - Generating certificates and keys ...
	I0124 10:40:32.118789   27263 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0124 10:40:32.118868   27263 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0124 10:40:32.118939   27263 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0124 10:40:32.119036   27263 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0124 10:40:32.119124   27263 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0124 10:40:32.119203   27263 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0124 10:40:32.119272   27263 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0124 10:40:32.119313   27263 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0124 10:40:32.119366   27263 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0124 10:40:32.119412   27263 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0124 10:40:32.119441   27263 kubeadm.go:322] [certs] Using the existing "sa" key
	I0124 10:40:32.119499   27263 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0124 10:40:32.155667   27263 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0124 10:40:32.409107   27263 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0124 10:40:32.615328   27263 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0124 10:40:32.678101   27263 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0124 10:40:32.678779   27263 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0124 10:40:32.700218   27263 out.go:204]   - Booting up control plane ...
	I0124 10:40:32.700289   27263 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0124 10:40:32.700356   27263 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0124 10:40:32.700420   27263 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0124 10:40:32.700480   27263 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0124 10:40:32.700600   27263 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0124 10:41:12.689848   27263 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0124 10:41:12.690483   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:41:12.690617   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:41:17.691224   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:41:17.691386   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:41:27.692167   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:41:27.692348   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:41:47.693321   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:41:47.693485   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:42:27.694732   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:42:27.694874   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:42:27.694886   27263 kubeadm.go:322] 
	I0124 10:42:27.694915   27263 kubeadm.go:322] Unfortunately, an error has occurred:
	I0124 10:42:27.694950   27263 kubeadm.go:322] 	timed out waiting for the condition
	I0124 10:42:27.694956   27263 kubeadm.go:322] 
	I0124 10:42:27.694988   27263 kubeadm.go:322] This error is likely caused by:
	I0124 10:42:27.695013   27263 kubeadm.go:322] 	- The kubelet is not running
	I0124 10:42:27.695130   27263 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0124 10:42:27.695141   27263 kubeadm.go:322] 
	I0124 10:42:27.695219   27263 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0124 10:42:27.695259   27263 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0124 10:42:27.695293   27263 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0124 10:42:27.695300   27263 kubeadm.go:322] 
	I0124 10:42:27.695508   27263 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0124 10:42:27.695579   27263 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0124 10:42:27.695661   27263 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0124 10:42:27.695709   27263 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0124 10:42:27.695799   27263 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0124 10:42:27.695826   27263 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0124 10:42:27.698953   27263 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0124 10:42:27.699027   27263 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0124 10:42:27.699185   27263 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
	I0124 10:42:27.699283   27263 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0124 10:42:27.699352   27263 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0124 10:42:27.699417   27263 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0124 10:42:27.699437   27263 kubeadm.go:403] StartCluster complete in 8m5.550644532s
	I0124 10:42:27.699525   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:42:27.725130   27263 logs.go:279] 0 containers: []
	W0124 10:42:27.725144   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:42:27.725281   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:42:27.749346   27263 logs.go:279] 0 containers: []
	W0124 10:42:27.749360   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:42:27.749433   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:42:27.774887   27263 logs.go:279] 0 containers: []
	W0124 10:42:27.774903   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:42:27.774977   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:42:27.818348   27263 logs.go:279] 0 containers: []
	W0124 10:42:27.818364   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:42:27.818452   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:42:27.842586   27263 logs.go:279] 0 containers: []
	W0124 10:42:27.842601   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:42:27.842678   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:42:27.873567   27263 logs.go:279] 0 containers: []
	W0124 10:42:27.873585   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:42:27.873662   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:42:27.904877   27263 logs.go:279] 0 containers: []
	W0124 10:42:27.904892   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:42:27.904968   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:42:27.931921   27263 logs.go:279] 0 containers: []
	W0124 10:42:27.931935   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:42:27.931942   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:42:27.931949   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:42:27.947814   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:42:27.947828   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:42:30.001529   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053676358s)
	I0124 10:42:30.001677   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:42:30.001687   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:42:30.043303   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:42:30.043320   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:42:30.055895   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:42:30.055911   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:42:30.114077   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0124 10:42:30.114093   27263 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0124 10:42:30.114113   27263 out.go:239] * 
	W0124 10:42:30.114240   27263 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0124 10:42:30.114277   27263 out.go:239] * 
	W0124 10:42:30.114974   27263 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0124 10:42:30.196610   27263 out.go:177] 
	W0124 10:42:30.270828   27263 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0124 10:42:30.270952   27263 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0124 10:42:30.271005   27263 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0124 10:42:30.345504   27263 out.go:177] 

                                                
                                                
** /stderr **
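The failure above is minikube's K8S_KUBELET_NOT_RUNNING exit: kubeadm's wait-control-plane phase gives up after the kubelet never answers on localhost:10248, and the preflight warnings flag a cgroupfs/systemd cgroup-driver mismatch. A minimal sketch of the remediation the log itself suggests, reusing the same profile, driver and Kubernetes version as this run (not verified on this agent), would be:

	# retry the profile with the kubelet cgroup-driver override from the suggestion line
	out/minikube-darwin-amd64 start -p old-k8s-version-115000 --driver=docker \
	  --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd

	# or align Docker itself with systemd (standard daemon.json option; restart Docker after editing)
	# /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=systemd"]
	}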
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-115000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-115000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-115000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7",
	        "Created": "2023-01-24T18:28:39.694404231Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313083,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-24T18:34:18.167062744Z",
	            "FinishedAt": "2023-01-24T18:34:15.257971971Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/hosts",
	        "LogPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7-json.log",
	        "Name": "/old-k8s-version-115000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-115000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-115000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0-init/diff:/var/lib/docker/overlay2/9c481b6fc4eda78a970c6b00c96c9f145b5d588195952c506fee1f3a38f6b827/diff:/var/lib/docker/overlay2/47481f9bc7214dddcf62375b5df8c94540c57703b1d6d51ac990043359419126/diff:/var/lib/docker/overlay2/b598a508e6cab858c4e3e0eb9d52d91575db756297302c95c476bb85082f44e0/diff:/var/lib/docker/overlay2/eb2938493863a763d06f90aa9a8774a6cf5891162d070c28c5ea356826cb9c00/diff:/var/lib/docker/overlay2/32de44262b0214eac920d9f248eab5c56907a2e7f6aa69c29c4e46c4074a7cac/diff:/var/lib/docker/overlay2/1a295a0be202f67a8806d1a09e94b6509e529ccee06e851bf52c63e932a41598/diff:/var/lib/docker/overlay2/f31de4b92cc5bd6d7f65b450b1907917e4402f07bc4105fb25aa1f8246c4a4e5/diff:/var/lib/docker/overlay2/91ed229d1177ed82e2ad15ef48cab26e269f4aac446f236ccf7602f7812b4844/diff:/var/lib/docker/overlay2/d76c7ee0e38e12ccde73d3892d573e4aace99b016da7a10649752a4fe4fe8638/diff:/var/lib/docker/overlay2/5ded14
3e1cfe20dc794363143043800e9ddaa724457e76dfc3480bc88bdcf50b/diff:/var/lib/docker/overlay2/93a2cbd1b1d5abd2ffd6d5d72e40d5e932e3cdee4ef9b08c0ff0e340f0b82250/diff:/var/lib/docker/overlay2/bb3f5e5d2796cb20f7b19463a04326cc5e79a70f606d384a018bf2677fdc9c59/diff:/var/lib/docker/overlay2/9cbefe5a0d987b7c33fa83edf8c1c1e50920e178002ea80eff9cdf15510e9f7b/diff:/var/lib/docker/overlay2/33e82fcf8ab49ebe12677ce9de48d21aef3b5793b44b0bcabc73c3674c581010/diff:/var/lib/docker/overlay2/e11a40d223ac259cc9385f5df423a4114d92d60488a253e2c2690c0b7457a8e8/diff:/var/lib/docker/overlay2/61a6833055db19e06daf765b9f85b5e3a0b5c194642afb06034ca3aba1f55909/diff:/var/lib/docker/overlay2/ee3319bed930e23040288482eca784537f46b47463ff427584d9b2b5211797e9/diff:/var/lib/docker/overlay2/eee58dbcea38f98302a06040b8570e985d703290fd458ac7329bfa0e55bd8448/diff:/var/lib/docker/overlay2/b658fe28fcf5e1fc3d6b2e9b4b847d6121a8b983e9c7fb1985d5ab2346e98f8b/diff:/var/lib/docker/overlay2/31ea9d8f709524c49028c58c0f8a05346bb654e8f78be840f557b2bedf65a54a/diff:/var/lib/d
ocker/overlay2/3cf9440c13010ba35d99312d9b15c642407767bf1e0fc45d05a2549c249afec7/diff:/var/lib/docker/overlay2/800878867e03c9e3e3b31fd2704a00aa2ae0e3986914887bf3d409b9550ff520/diff:/var/lib/docker/overlay2/581fb56aa8110438414995ed4c4a14b9912c75de1736b4b7d23e0b8fe720ecd9/diff:/var/lib/docker/overlay2/07b86cd484da6c8fb2f4bee904f870665d71591e8dc7be5efa53e1794b81d00f/diff:/var/lib/docker/overlay2/1079786f8be32b332c09a1406a8c3309f812525af6501891ff134bf591345a8d/diff:/var/lib/docker/overlay2/aaaabdfc565926946334338a8c5990344cee39d099a4b9f5151467564c8b476e/diff:/var/lib/docker/overlay2/f39918c58fc40801faea98e8de1249f865a73650e08eab314c9d152e6a3b34b5/diff:/var/lib/docker/overlay2/974fee46992fba128bb6ec5407ff459f67134f9674349df844ad7efddb1ce197/diff:/var/lib/docker/overlay2/da1fb4192fa1a6e35f4cad8340c23595b9197c7b307dc3acbddafef7c3eabc82/diff:/var/lib/docker/overlay2/3b77e137962b80628f381627a68f75ff468d7ccae4458d96fa0579ac771218fe/diff:/var/lib/docker/overlay2/3faa717465cabff3baa6c98c646cb65ad47d063b215430893bb7f045676
3a794/diff:/var/lib/docker/overlay2/9597f4dd91d0689567f5b93707e47b90e9d9ba308488dff4c828008698d40802/diff:/var/lib/docker/overlay2/04df968177dde5eecc89152d1a182fb9b925a8092117778b87497db1e350ce61/diff:/var/lib/docker/overlay2/466c44ae57218cc92c3105e1e0070db81834724a69f8a4417391186e96073aae/diff:/var/lib/docker/overlay2/ec2a2a479f00a246da5470537471969c82985b2cfb98a4e35c9ff415d2a73141/diff:/var/lib/docker/overlay2/bce991b71b3cfd9fbda43bee1a05c1f7f5d1182ddbcf8f503851bd5874171f2b/diff:/var/lib/docker/overlay2/fe2929ef3680f5c3a0bd3eb8595ea97b8e43bba3f4c05803abf70a6c92a68114/diff:/var/lib/docker/overlay2/656654dcbc6d86ba911abe2cc555591ab82c37f6960cd0dad3551f4bf80122ee/diff:/var/lib/docker/overlay2/0e7faeb1891565534bd5d77c8619c32e7b51715e75dc04640b4ef0c3bc72a6b8/diff:/var/lib/docker/overlay2/9f0ce2e9bbc988036f4207b6b8237ba1f52855f6ee97e629aa058d0adcb5e7c4/diff:/var/lib/docker/overlay2/2bd42f9be2610b8910f7b39f5ce1792b9f608e775e6d3b39b12fef011686b5bd/diff:/var/lib/docker/overlay2/79703f59d1d8ea0485fb2360a75307a714a167
fc401d5ce68a58a9a201ea4152/diff:/var/lib/docker/overlay2/e589647f3d60f11426627ec161f2bf8878a577bc3388bb26e206fc5182f4f83c/diff:/var/lib/docker/overlay2/059bdd38117e61965c55837c7fd45d7e2825b162ea4e4cab6a656a436a7bbed6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-115000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-115000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-115000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-115000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-115000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aeae45eac0d9801aed631b6f91823fc2a72eaba680eac64041de99fb28e72c64",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55502"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55498"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55499"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55500"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55501"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/aeae45eac0d9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-115000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a86b483b8467",
	                        "old-k8s-version-115000"
	                    ],
	                    "NetworkID": "5ad39a0309903e0f7f41a6f2aca4e1831033f2fa0e547dc51ca28338a4ce6eed",
	                    "EndpointID": "8a3908d14d7d1b555adf982499222c517cb6fc8a004ccb9ffc793e4d2e71600d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
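The inspect output above shows the node container itself is healthy (State.Status "running", RestartCount 0) and the apiserver port 8443/tcp is published on 127.0.0.1:55501; only the control plane inside it failed to come up. When only those fields matter, docker's --format flag can pull them without the full JSON dump, for example:

	docker inspect -f '{{.State.Status}} exit={{.State.ExitCode}} restarts={{.RestartCount}}' old-k8s-version-115000
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-115000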
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-115000 -n old-k8s-version-115000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-115000 -n old-k8s-version-115000: exit status 2 (608.826668ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
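The repeated kubelet-check lines earlier probe http://localhost:10248/healthz from inside the node. A hand check of the same endpoint and of the kubelet unit log, assuming the profile is still running and that minikube ssh accepts an inline command as it does for the ssh-based steps in the Audit table below, would look like:

	out/minikube-darwin-amd64 -p old-k8s-version-115000 ssh "curl -sSL http://localhost:10248/healthz"
	out/minikube-darwin-amd64 -p old-k8s-version-115000 ssh "sudo journalctl -u kubelet -n 100 --no-pager"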
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-115000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-115000 logs -n 25: (3.74833848s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p false-129000 sudo                              | false-129000           | jenkins | v1.28.0 | 24 Jan 23 10:29 PST | 24 Jan 23 10:29 PST |
	|         | containerd config dump                            |                        |         |         |                     |                     |
	| ssh     | -p false-129000 sudo systemctl                    | false-129000           | jenkins | v1.28.0 | 24 Jan 23 10:29 PST |                     |
	|         | status crio --all --full                          |                        |         |         |                     |                     |
	|         | --no-pager                                        |                        |         |         |                     |                     |
	| ssh     | -p false-129000 sudo systemctl                    | false-129000           | jenkins | v1.28.0 | 24 Jan 23 10:29 PST | 24 Jan 23 10:29 PST |
	|         | cat crio --no-pager                               |                        |         |         |                     |                     |
	| ssh     | -p false-129000 sudo find                         | false-129000           | jenkins | v1.28.0 | 24 Jan 23 10:29 PST | 24 Jan 23 10:29 PST |
	|         | /etc/crio -type f -exec sh -c                     |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                              |                        |         |         |                     |                     |
	| ssh     | -p false-129000 sudo crio                         | false-129000           | jenkins | v1.28.0 | 24 Jan 23 10:29 PST | 24 Jan 23 10:29 PST |
	|         | config                                            |                        |         |         |                     |                     |
	| delete  | -p false-129000                                   | false-129000           | jenkins | v1.28.0 | 24 Jan 23 10:29 PST | 24 Jan 23 10:29 PST |
	| start   | -p no-preload-307000                              | no-preload-307000      | jenkins | v1.28.0 | 24 Jan 23 10:29 PST | 24 Jan 23 10:30 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-307000        | no-preload-307000      | jenkins | v1.28.0 | 24 Jan 23 10:30 PST | 24 Jan 23 10:30 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p no-preload-307000                              | no-preload-307000      | jenkins | v1.28.0 | 24 Jan 23 10:30 PST | 24 Jan 23 10:30 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-307000             | no-preload-307000      | jenkins | v1.28.0 | 24 Jan 23 10:30 PST | 24 Jan 23 10:30 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-307000                              | no-preload-307000      | jenkins | v1.28.0 | 24 Jan 23 10:30 PST | 24 Jan 23 10:35 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-115000   | old-k8s-version-115000 | jenkins | v1.28.0 | 24 Jan 23 10:32 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-115000                         | old-k8s-version-115000 | jenkins | v1.28.0 | 24 Jan 23 10:34 PST | 24 Jan 23 10:34 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-115000        | old-k8s-version-115000 | jenkins | v1.28.0 | 24 Jan 23 10:34 PST | 24 Jan 23 10:34 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-115000                         | old-k8s-version-115000 | jenkins | v1.28.0 | 24 Jan 23 10:34 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --kvm-network=default                             |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                        |         |         |                     |                     |
	|         | --keep-context=false                              |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                        |         |         |                     |                     |
	| ssh     | -p no-preload-307000 sudo                         | no-preload-307000      | jenkins | v1.28.0 | 24 Jan 23 10:36 PST | 24 Jan 23 10:36 PST |
	|         | crictl images -o json                             |                        |         |         |                     |                     |
	| pause   | -p no-preload-307000                              | no-preload-307000      | jenkins | v1.28.0 | 24 Jan 23 10:36 PST | 24 Jan 23 10:36 PST |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| unpause | -p no-preload-307000                              | no-preload-307000      | jenkins | v1.28.0 | 24 Jan 23 10:36 PST | 24 Jan 23 10:36 PST |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| delete  | -p no-preload-307000                              | no-preload-307000      | jenkins | v1.28.0 | 24 Jan 23 10:36 PST | 24 Jan 23 10:36 PST |
	| delete  | -p no-preload-307000                              | no-preload-307000      | jenkins | v1.28.0 | 24 Jan 23 10:36 PST | 24 Jan 23 10:36 PST |
	| start   | -p embed-certs-777000                             | embed-certs-777000     | jenkins | v1.28.0 | 24 Jan 23 10:36 PST | 24 Jan 23 10:37 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-777000       | embed-certs-777000     | jenkins | v1.28.0 | 24 Jan 23 10:37 PST | 24 Jan 23 10:37 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p embed-certs-777000                             | embed-certs-777000     | jenkins | v1.28.0 | 24 Jan 23 10:37 PST | 24 Jan 23 10:37 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-777000            | embed-certs-777000     | jenkins | v1.28.0 | 24 Jan 23 10:37 PST | 24 Jan 23 10:37 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-777000                             | embed-certs-777000     | jenkins | v1.28.0 | 24 Jan 23 10:37 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/24 10:37:46
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0124 10:37:46.212524   27980 out.go:296] Setting OutFile to fd 1 ...
	I0124 10:37:46.212691   27980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 10:37:46.212696   27980 out.go:309] Setting ErrFile to fd 2...
	I0124 10:37:46.212700   27980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 10:37:46.212820   27980 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
	I0124 10:37:46.213289   27980 out.go:303] Setting JSON to false
	I0124 10:37:46.231412   27980 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5841,"bootTime":1674579625,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0124 10:37:46.231488   27980 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0124 10:37:46.254112   27980 out.go:177] * [embed-certs-777000] minikube v1.28.0 on Darwin 13.1
	I0124 10:37:46.296595   27980 notify.go:220] Checking for updates...
	I0124 10:37:46.318624   27980 out.go:177]   - MINIKUBE_LOCATION=15565
	I0124 10:37:46.341819   27980 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 10:37:46.363443   27980 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0124 10:37:46.384881   27980 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 10:37:46.406855   27980 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	I0124 10:37:46.428715   27980 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0124 10:37:46.450306   27980 config.go:180] Loaded profile config "embed-certs-777000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 10:37:46.450982   27980 driver.go:365] Setting default libvirt URI to qemu:///system
	I0124 10:37:46.512531   27980 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0124 10:37:46.512674   27980 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 10:37:46.656891   27980 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-24 18:37:46.555431554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 10:37:46.699278   27980 out.go:177] * Using the docker driver based on existing profile
	I0124 10:37:46.720518   27980 start.go:296] selected driver: docker
	I0124 10:37:46.720536   27980 start.go:840] validating driver "docker" against &{Name:embed-certs-777000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-777000 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:37:46.720599   27980 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0124 10:37:46.723336   27980 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 10:37:46.867943   27980 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-24 18:37:46.768473284 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 10:37:46.868134   27980 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0124 10:37:46.868154   27980 cni.go:84] Creating CNI manager for ""
	I0124 10:37:46.868167   27980 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0124 10:37:46.868179   27980 start_flags.go:319] config:
	{Name:embed-certs-777000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-777000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:37:44.202396   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:37:44.331144   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:37:44.357057   27263 logs.go:279] 0 containers: []
	W0124 10:37:44.357071   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:37:44.357139   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:37:44.380881   27263 logs.go:279] 0 containers: []
	W0124 10:37:44.380896   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:37:44.380969   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:37:44.406888   27263 logs.go:279] 0 containers: []
	W0124 10:37:44.406902   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:37:44.406971   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:37:44.461959   27263 logs.go:279] 0 containers: []
	W0124 10:37:44.461974   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:37:44.462038   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:37:44.485167   27263 logs.go:279] 0 containers: []
	W0124 10:37:44.485182   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:37:44.485250   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:37:44.510226   27263 logs.go:279] 0 containers: []
	W0124 10:37:44.510239   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:37:44.510340   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:37:44.533891   27263 logs.go:279] 0 containers: []
	W0124 10:37:44.533906   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:37:44.533977   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:37:44.557400   27263 logs.go:279] 0 containers: []
	W0124 10:37:44.557415   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:37:44.557424   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:37:44.557431   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:37:44.596718   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:37:44.596732   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:37:44.609225   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:37:44.609242   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:37:44.665845   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:37:44.665856   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:37:44.665863   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:37:44.681616   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:37:44.681629   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:37:46.732928   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050989323s)
	I0124 10:37:46.890325   27980 out.go:177] * Starting control plane node embed-certs-777000 in cluster embed-certs-777000
	I0124 10:37:46.932770   27980 cache.go:120] Beginning downloading kic base image for docker with docker
	I0124 10:37:46.953903   27980 out.go:177] * Pulling base image ...
	I0124 10:37:46.995924   27980 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 10:37:46.995924   27980 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0124 10:37:46.996037   27980 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0124 10:37:46.996058   27980 cache.go:57] Caching tarball of preloaded images
	I0124 10:37:46.996274   27980 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0124 10:37:46.996294   27980 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0124 10:37:46.997350   27980 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/embed-certs-777000/config.json ...
	I0124 10:37:47.052995   27980 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0124 10:37:47.053012   27980 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0124 10:37:47.053037   27980 cache.go:193] Successfully downloaded all kic artifacts
	I0124 10:37:47.053145   27980 start.go:364] acquiring machines lock for embed-certs-777000: {Name:mk9cfd38639e45be5b1f7891b1b30b00625ecce0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0124 10:37:47.053237   27980 start.go:368] acquired machines lock for "embed-certs-777000" in 73.591µs
	I0124 10:37:47.053259   27980 start.go:96] Skipping create...Using existing machine configuration
	I0124 10:37:47.053268   27980 fix.go:55] fixHost starting: 
	I0124 10:37:47.053511   27980 cli_runner.go:164] Run: docker container inspect embed-certs-777000 --format={{.State.Status}}
	I0124 10:37:47.109349   27980 fix.go:103] recreateIfNeeded on embed-certs-777000: state=Stopped err=<nil>
	W0124 10:37:47.109395   27980 fix.go:129] unexpected machine state, will restart: <nil>
	I0124 10:37:47.152820   27980 out.go:177] * Restarting existing docker container for "embed-certs-777000" ...
	I0124 10:37:47.174362   27980 cli_runner.go:164] Run: docker start embed-certs-777000
	I0124 10:37:47.518166   27980 cli_runner.go:164] Run: docker container inspect embed-certs-777000 --format={{.State.Status}}
	I0124 10:37:47.581380   27980 kic.go:426] container "embed-certs-777000" state is running.
	I0124 10:37:47.581960   27980 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-777000
	I0124 10:37:47.644705   27980 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/embed-certs-777000/config.json ...
	I0124 10:37:47.645205   27980 machine.go:88] provisioning docker machine ...
	I0124 10:37:47.645233   27980 ubuntu.go:169] provisioning hostname "embed-certs-777000"
	I0124 10:37:47.645335   27980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-777000
	I0124 10:37:47.714760   27980 main.go:141] libmachine: Using SSH client type: native
	I0124 10:37:47.715594   27980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55677 <nil> <nil>}
	I0124 10:37:47.715789   27980 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-777000 && echo "embed-certs-777000" | sudo tee /etc/hostname
	I0124 10:37:47.878301   27980 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-777000
	
	I0124 10:37:47.878470   27980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-777000
	I0124 10:37:47.939615   27980 main.go:141] libmachine: Using SSH client type: native
	I0124 10:37:47.939792   27980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55677 <nil> <nil>}
	I0124 10:37:47.939812   27980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-777000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-777000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-777000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0124 10:37:48.072610   27980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0124 10:37:48.072629   27980 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3057/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3057/.minikube}
	I0124 10:37:48.072649   27980 ubuntu.go:177] setting up certificates
	I0124 10:37:48.072661   27980 provision.go:83] configureAuth start
	I0124 10:37:48.072726   27980 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-777000
	I0124 10:37:48.132418   27980 provision.go:138] copyHostCerts
	I0124 10:37:48.132530   27980 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem, removing ...
	I0124 10:37:48.132541   27980 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem
	I0124 10:37:48.132644   27980 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem (1078 bytes)
	I0124 10:37:48.132855   27980 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem, removing ...
	I0124 10:37:48.132862   27980 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem
	I0124 10:37:48.132963   27980 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem (1123 bytes)
	I0124 10:37:48.133116   27980 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem, removing ...
	I0124 10:37:48.133122   27980 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem
	I0124 10:37:48.133191   27980 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem (1675 bytes)
	I0124 10:37:48.133315   27980 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem org=jenkins.embed-certs-777000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-777000]
	I0124 10:37:48.239087   27980 provision.go:172] copyRemoteCerts
	I0124 10:37:48.239155   27980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0124 10:37:48.239210   27980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-777000
	I0124 10:37:48.299267   27980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55677 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/embed-certs-777000/id_rsa Username:docker}
	I0124 10:37:48.397388   27980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0124 10:37:48.414750   27980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0124 10:37:48.432317   27980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0124 10:37:48.449457   27980 provision.go:86] duration metric: configureAuth took 376.736634ms
	I0124 10:37:48.449472   27980 ubuntu.go:193] setting minikube options for container-runtime
	I0124 10:37:48.449686   27980 config.go:180] Loaded profile config "embed-certs-777000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 10:37:48.449758   27980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-777000
	I0124 10:37:48.507891   27980 main.go:141] libmachine: Using SSH client type: native
	I0124 10:37:48.508057   27980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55677 <nil> <nil>}
	I0124 10:37:48.508066   27980 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0124 10:37:48.645156   27980 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0124 10:37:48.645179   27980 ubuntu.go:71] root file system type: overlay
	I0124 10:37:48.645325   27980 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0124 10:37:48.645417   27980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-777000
	I0124 10:37:48.703354   27980 main.go:141] libmachine: Using SSH client type: native
	I0124 10:37:48.703511   27980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55677 <nil> <nil>}
	I0124 10:37:48.703563   27980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0124 10:37:48.847135   27980 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0124 10:37:48.847243   27980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-777000
	I0124 10:37:48.907478   27980 main.go:141] libmachine: Using SSH client type: native
	I0124 10:37:48.907633   27980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55677 <nil> <nil>}
	I0124 10:37:48.907652   27980 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0124 10:37:49.048336   27980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0124 10:37:49.048352   27980 machine.go:91] provisioned docker machine in 1.402967027s
	I0124 10:37:49.048363   27980 start.go:300] post-start starting for "embed-certs-777000" (driver="docker")
	I0124 10:37:49.048368   27980 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0124 10:37:49.048429   27980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0124 10:37:49.048478   27980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-777000
	I0124 10:37:49.109581   27980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55677 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/embed-certs-777000/id_rsa Username:docker}
	I0124 10:37:49.206583   27980 ssh_runner.go:195] Run: cat /etc/os-release
	I0124 10:37:49.210367   27980 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0124 10:37:49.210386   27980 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0124 10:37:49.210394   27980 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0124 10:37:49.210399   27980 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0124 10:37:49.210407   27980 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/addons for local assets ...
	I0124 10:37:49.210519   27980 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/files for local assets ...
	I0124 10:37:49.210698   27980 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem -> 43552.pem in /etc/ssl/certs
	I0124 10:37:49.210910   27980 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0124 10:37:49.218422   27980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /etc/ssl/certs/43552.pem (1708 bytes)
	I0124 10:37:49.235758   27980 start.go:303] post-start completed in 187.365307ms
	I0124 10:37:49.235855   27980 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0124 10:37:49.235913   27980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-777000
	I0124 10:37:49.295525   27980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55677 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/embed-certs-777000/id_rsa Username:docker}
	I0124 10:37:49.387579   27980 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0124 10:37:49.392754   27980 fix.go:57] fixHost completed within 2.339194977s
	I0124 10:37:49.392773   27980 start.go:83] releasing machines lock for "embed-certs-777000", held for 2.33924048s
	I0124 10:37:49.392867   27980 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-777000
	I0124 10:37:49.456563   27980 ssh_runner.go:195] Run: cat /version.json
	I0124 10:37:49.456572   27980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0124 10:37:49.456627   27980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-777000
	I0124 10:37:49.456657   27980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-777000
	I0124 10:37:49.529948   27980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55677 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/embed-certs-777000/id_rsa Username:docker}
	I0124 10:37:49.529955   27980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55677 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/embed-certs-777000/id_rsa Username:docker}
	I0124 10:37:49.682201   27980 ssh_runner.go:195] Run: systemctl --version
	I0124 10:37:49.688805   27980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0124 10:37:49.695411   27980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0124 10:37:49.714018   27980 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0124 10:37:49.714159   27980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0124 10:37:49.724051   27980 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0124 10:37:49.737450   27980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0124 10:37:49.745985   27980 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0124 10:37:49.746006   27980 start.go:472] detecting cgroup driver to use...
	I0124 10:37:49.746027   27980 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 10:37:49.746118   27980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 10:37:49.759223   27980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0124 10:37:49.767943   27980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0124 10:37:49.777149   27980 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0124 10:37:49.777219   27980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0124 10:37:49.786182   27980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 10:37:49.795464   27980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0124 10:37:49.803943   27980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 10:37:49.812517   27980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0124 10:37:49.820652   27980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0124 10:37:49.829757   27980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0124 10:37:49.837319   27980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0124 10:37:49.844760   27980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:37:49.912845   27980 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0124 10:37:49.983511   27980 start.go:472] detecting cgroup driver to use...
	I0124 10:37:49.983532   27980 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 10:37:49.983578   27980 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0124 10:37:49.995467   27980 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0124 10:37:49.995539   27980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0124 10:37:50.006722   27980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 10:37:50.022101   27980 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0124 10:37:50.159923   27980 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0124 10:37:50.227408   27980 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0124 10:37:50.227426   27980 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0124 10:37:50.265705   27980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:37:50.368348   27980 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0124 10:37:50.624074   27980 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0124 10:37:50.694472   27980 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0124 10:37:50.766601   27980 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0124 10:37:50.837769   27980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:37:50.910506   27980 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0124 10:37:50.938011   27980 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0124 10:37:50.938098   27980 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0124 10:37:50.942255   27980 start.go:540] Will wait 60s for crictl version
	I0124 10:37:50.942304   27980 ssh_runner.go:195] Run: which crictl
	I0124 10:37:50.946041   27980 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0124 10:37:51.064684   27980 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.22
	RuntimeApiVersion:  v1alpha2
	I0124 10:37:51.064763   27980 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 10:37:51.094733   27980 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 10:37:51.167881   27980 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.22 ...
	I0124 10:37:51.168004   27980 cli_runner.go:164] Run: docker exec -t embed-certs-777000 dig +short host.docker.internal
	I0124 10:37:49.234335   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:37:49.330644   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:37:49.354692   27263 logs.go:279] 0 containers: []
	W0124 10:37:49.354705   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:37:49.354775   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:37:49.378346   27263 logs.go:279] 0 containers: []
	W0124 10:37:49.378359   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:37:49.378425   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:37:49.405532   27263 logs.go:279] 0 containers: []
	W0124 10:37:49.405546   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:37:49.405620   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:37:49.431408   27263 logs.go:279] 0 containers: []
	W0124 10:37:49.431422   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:37:49.431488   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:37:49.459304   27263 logs.go:279] 0 containers: []
	W0124 10:37:49.459320   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:37:49.459374   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:37:49.489169   27263 logs.go:279] 0 containers: []
	W0124 10:37:49.489182   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:37:49.489274   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:37:49.519973   27263 logs.go:279] 0 containers: []
	W0124 10:37:49.519990   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:37:49.520080   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:37:49.550475   27263 logs.go:279] 0 containers: []
	W0124 10:37:49.550487   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:37:49.550494   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:37:49.550503   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:37:49.591869   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:37:49.591885   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:37:49.604324   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:37:49.604339   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:37:49.666052   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:37:49.666067   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:37:49.666076   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:37:49.685136   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:37:49.685170   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:37:51.739579   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054178782s)
	I0124 10:37:51.304480   27980 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0124 10:37:51.304585   27980 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0124 10:37:51.309106   27980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 10:37:51.319132   27980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-777000
	I0124 10:37:51.379705   27980 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 10:37:51.379777   27980 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 10:37:51.405099   27980 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.4
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/etcd:v3.3.8-0-gke.1
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	registry.k8s.io/pause:test2
	
	-- /stdout --
	I0124 10:37:51.405133   27980 docker.go:560] Images already preloaded, skipping extraction
	I0124 10:37:51.405213   27980 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 10:37:51.431212   27980 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.4
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/etcd:v3.3.8-0-gke.1
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	registry.k8s.io/pause:test2
	
	-- /stdout --
	I0124 10:37:51.431232   27980 cache_images.go:84] Images are preloaded, skipping loading
	I0124 10:37:51.431338   27980 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0124 10:37:51.504476   27980 cni.go:84] Creating CNI manager for ""
	I0124 10:37:51.504496   27980 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0124 10:37:51.504520   27980 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0124 10:37:51.504544   27980 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-777000 NodeName:embed-certs-777000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0124 10:37:51.504685   27980 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-777000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0124 10:37:51.504773   27980 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-777000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:embed-certs-777000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0124 10:37:51.504838   27980 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0124 10:37:51.513587   27980 binaries.go:44] Found k8s binaries, skipping transfer
	I0124 10:37:51.513690   27980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0124 10:37:51.557011   27980 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0124 10:37:51.570423   27980 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0124 10:37:51.583684   27980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0124 10:37:51.597143   27980 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0124 10:37:51.601740   27980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 10:37:51.611779   27980 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/embed-certs-777000 for IP: 192.168.67.2
	I0124 10:37:51.611797   27980 certs.go:186] acquiring lock for shared ca certs: {Name:mk86da88dcf76a0f52f98f7208bc1d04a2e55c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:37:51.612025   27980 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key
	I0124 10:37:51.612089   27980 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key
	I0124 10:37:51.612198   27980 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/embed-certs-777000/client.key
	I0124 10:37:51.612301   27980 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/embed-certs-777000/apiserver.key.c7fa3a9e
	I0124 10:37:51.612381   27980 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/embed-certs-777000/proxy-client.key
	I0124 10:37:51.612664   27980 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem (1338 bytes)
	W0124 10:37:51.612709   27980 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355_empty.pem, impossibly tiny 0 bytes
	I0124 10:37:51.612719   27980 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem (1679 bytes)
	I0124 10:37:51.612761   27980 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem (1078 bytes)
	I0124 10:37:51.612796   27980 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem (1123 bytes)
	I0124 10:37:51.612826   27980 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem (1675 bytes)
	I0124 10:37:51.612898   27980 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem (1708 bytes)
	I0124 10:37:51.613463   27980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/embed-certs-777000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0124 10:37:51.631922   27980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/embed-certs-777000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0124 10:37:51.649360   27980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/embed-certs-777000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0124 10:37:51.668019   27980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/embed-certs-777000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0124 10:37:51.685793   27980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0124 10:37:51.703261   27980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0124 10:37:51.721489   27980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0124 10:37:51.740881   27980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0124 10:37:51.758779   27980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem --> /usr/share/ca-certificates/4355.pem (1338 bytes)
	I0124 10:37:51.776728   27980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /usr/share/ca-certificates/43552.pem (1708 bytes)
	I0124 10:37:51.794401   27980 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0124 10:37:51.812311   27980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0124 10:37:51.825783   27980 ssh_runner.go:195] Run: openssl version
	I0124 10:37:51.831562   27980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0124 10:37:51.840468   27980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:37:51.844809   27980 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:37:51.844853   27980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:37:51.850261   27980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0124 10:37:51.857938   27980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4355.pem && ln -fs /usr/share/ca-certificates/4355.pem /etc/ssl/certs/4355.pem"
	I0124 10:37:51.866345   27980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4355.pem
	I0124 10:37:51.870396   27980 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:34 /usr/share/ca-certificates/4355.pem
	I0124 10:37:51.870489   27980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4355.pem
	I0124 10:37:51.876874   27980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4355.pem /etc/ssl/certs/51391683.0"
	I0124 10:37:51.885136   27980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43552.pem && ln -fs /usr/share/ca-certificates/43552.pem /etc/ssl/certs/43552.pem"
	I0124 10:37:51.894563   27980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43552.pem
	I0124 10:37:51.899242   27980 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:34 /usr/share/ca-certificates/43552.pem
	I0124 10:37:51.899311   27980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43552.pem
	I0124 10:37:51.905283   27980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43552.pem /etc/ssl/certs/3ec20f2e.0"
	I0124 10:37:51.914130   27980 kubeadm.go:401] StartCluster: {Name:embed-certs-777000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-777000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:37:51.914258   27980 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0124 10:37:51.939973   27980 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0124 10:37:51.948489   27980 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0124 10:37:51.948505   27980 kubeadm.go:633] restartCluster start
	I0124 10:37:51.948575   27980 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0124 10:37:51.956300   27980 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:51.956390   27980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-777000
	I0124 10:37:52.017521   27980 kubeconfig.go:135] verify returned: extract IP: "embed-certs-777000" does not appear in /Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 10:37:52.017700   27980 kubeconfig.go:146] "embed-certs-777000" context is missing from /Users/jenkins/minikube-integration/15565-3057/kubeconfig - will repair!
	I0124 10:37:52.018066   27980 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/kubeconfig: {Name:mk581b13c705409309a542f9aac4783c330d27c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:37:52.019494   27980 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0124 10:37:52.027630   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:37:52.027696   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:37:52.036745   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:52.538911   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:37:52.539169   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:37:52.550350   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:53.038278   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:37:53.038405   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:37:53.049132   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:53.538643   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:37:53.538730   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:37:53.548079   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:54.037449   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:37:54.037652   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:37:54.048731   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:54.537043   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:37:54.537174   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:37:54.548244   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:55.038120   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:37:55.038273   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:37:55.049326   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:55.538732   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:37:55.538886   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:37:55.550354   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:56.037191   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:37:56.037295   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:37:56.047275   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:54.242100   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:37:54.331877   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:37:54.358124   27263 logs.go:279] 0 containers: []
	W0124 10:37:54.358138   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:37:54.358206   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:37:54.381880   27263 logs.go:279] 0 containers: []
	W0124 10:37:54.381893   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:37:54.381962   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:37:54.406255   27263 logs.go:279] 0 containers: []
	W0124 10:37:54.406270   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:37:54.406355   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:37:54.430379   27263 logs.go:279] 0 containers: []
	W0124 10:37:54.430392   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:37:54.430462   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:37:54.453277   27263 logs.go:279] 0 containers: []
	W0124 10:37:54.453291   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:37:54.453361   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:37:54.477506   27263 logs.go:279] 0 containers: []
	W0124 10:37:54.477519   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:37:54.477588   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:37:54.500280   27263 logs.go:279] 0 containers: []
	W0124 10:37:54.500295   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:37:54.500364   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:37:54.523649   27263 logs.go:279] 0 containers: []
	W0124 10:37:54.523664   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:37:54.523671   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:37:54.523677   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:37:54.564398   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:37:54.564411   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:37:54.576590   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:37:54.576603   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:37:54.632595   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:37:54.632635   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:37:54.632643   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:37:54.648988   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:37:54.649000   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:37:56.700531   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0513572s)
	I0124 10:37:56.537741   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:37:56.537861   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:37:56.548892   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:57.038097   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:37:57.038203   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:37:57.047357   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:57.538502   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:37:57.538726   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:37:57.549870   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:58.037833   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:37:58.037945   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:37:58.049255   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:58.537411   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:37:58.537514   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:37:58.547625   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:59.039445   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:37:59.039677   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:37:59.051043   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:59.538147   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:37:59.538215   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:37:59.548528   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:38:00.037575   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:38:00.037656   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:38:00.047190   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:38:00.538166   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:38:00.538273   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:38:00.549315   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:38:01.038928   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:38:01.039055   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:38:01.050067   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:37:59.203014   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:37:59.332298   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:37:59.356942   27263 logs.go:279] 0 containers: []
	W0124 10:37:59.356957   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:37:59.357031   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:37:59.380149   27263 logs.go:279] 0 containers: []
	W0124 10:37:59.380163   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:37:59.380241   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:37:59.404417   27263 logs.go:279] 0 containers: []
	W0124 10:37:59.404430   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:37:59.404510   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:37:59.429602   27263 logs.go:279] 0 containers: []
	W0124 10:37:59.429618   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:37:59.429688   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:37:59.476195   27263 logs.go:279] 0 containers: []
	W0124 10:37:59.476209   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:37:59.476278   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:37:59.501120   27263 logs.go:279] 0 containers: []
	W0124 10:37:59.501137   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:37:59.501231   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:37:59.525455   27263 logs.go:279] 0 containers: []
	W0124 10:37:59.525469   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:37:59.525537   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:37:59.550487   27263 logs.go:279] 0 containers: []
	W0124 10:37:59.550499   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:37:59.550506   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:37:59.550513   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:37:59.566119   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:37:59.566131   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:38:01.618086   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051821613s)
	I0124 10:38:01.618192   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:38:01.618199   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:38:01.656904   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:38:01.656921   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:38:01.670866   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:38:01.670881   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:38:01.729369   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:38:01.537594   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:38:01.537701   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:38:01.547840   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:38:02.039685   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:38:02.039858   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:38:02.050866   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:38:02.050876   27980 api_server.go:165] Checking apiserver status ...
	I0124 10:38:02.050928   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:38:02.059262   27980 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:38:02.059274   27980 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0124 10:38:02.059278   27980 kubeadm.go:1120] stopping kube-system containers ...
	I0124 10:38:02.059364   27980 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0124 10:38:02.084434   27980 docker.go:456] Stopping containers: [1e8c77f7c404 e16ba54b7c8b 69e514fae829 e708abb834d9 2679ac587d56 738010b647a7 43732829da3d d63acd31d720 6ecdb02936db 2843c097f4ea 7e2aca0f8230 a6dd9d374d1b dcd4d9a2bd7a 5543fc2ed827 b7e6ef5c3c9e]
	I0124 10:38:02.084521   27980 ssh_runner.go:195] Run: docker stop 1e8c77f7c404 e16ba54b7c8b 69e514fae829 e708abb834d9 2679ac587d56 738010b647a7 43732829da3d d63acd31d720 6ecdb02936db 2843c097f4ea 7e2aca0f8230 a6dd9d374d1b dcd4d9a2bd7a 5543fc2ed827 b7e6ef5c3c9e
	I0124 10:38:02.109664   27980 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0124 10:38:02.120233   27980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0124 10:38:02.127941   27980 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan 24 18:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 24 18:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Jan 24 18:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan 24 18:36 /etc/kubernetes/scheduler.conf
	
	I0124 10:38:02.127997   27980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0124 10:38:02.135737   27980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0124 10:38:02.143406   27980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0124 10:38:02.150683   27980 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:38:02.150731   27980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0124 10:38:02.158070   27980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0124 10:38:02.165831   27980 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:38:02.165880   27980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0124 10:38:02.173223   27980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0124 10:38:02.180842   27980 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0124 10:38:02.180860   27980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:38:02.270771   27980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:38:02.737297   27980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:38:02.875918   27980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:38:02.966922   27980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:38:03.079435   27980 api_server.go:51] waiting for apiserver process to appear ...
	I0124 10:38:03.079513   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:03.594594   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:04.093316   27980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:04.164884   27980 api_server.go:71] duration metric: took 1.085401435s to wait for apiserver process to appear ...
	I0124 10:38:04.164904   27980 api_server.go:87] waiting for apiserver healthz status ...
	I0124 10:38:04.164918   27980 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55676/healthz ...
	I0124 10:38:04.166598   27980 api_server.go:268] stopped: https://127.0.0.1:55676/healthz: Get "https://127.0.0.1:55676/healthz": EOF
	I0124 10:38:04.666776   27980 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55676/healthz ...
	I0124 10:38:04.229839   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:04.332165   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:38:04.358102   27263 logs.go:279] 0 containers: []
	W0124 10:38:04.358115   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:38:04.358184   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:38:04.385371   27263 logs.go:279] 0 containers: []
	W0124 10:38:04.385387   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:38:04.385474   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:38:04.412963   27263 logs.go:279] 0 containers: []
	W0124 10:38:04.412981   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:38:04.413067   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:38:04.437616   27263 logs.go:279] 0 containers: []
	W0124 10:38:04.437628   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:38:04.437698   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:38:04.462071   27263 logs.go:279] 0 containers: []
	W0124 10:38:04.462086   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:38:04.462164   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:38:04.487734   27263 logs.go:279] 0 containers: []
	W0124 10:38:04.487764   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:38:04.487847   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:38:04.517217   27263 logs.go:279] 0 containers: []
	W0124 10:38:04.517236   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:38:04.517314   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:38:04.541979   27263 logs.go:279] 0 containers: []
	W0124 10:38:04.541992   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:38:04.542000   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:38:04.542006   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:38:04.558033   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:38:04.558047   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:38:06.614081   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055928178s)
	I0124 10:38:06.614203   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:38:06.614212   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:38:06.653837   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:38:06.653851   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:38:06.667066   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:38:06.667085   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:38:06.725130   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:38:06.384351   27980 api_server.go:278] https://127.0.0.1:55676/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0124 10:38:06.384371   27980 api_server.go:102] status: https://127.0.0.1:55676/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0124 10:38:06.666816   27980 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55676/healthz ...
	I0124 10:38:06.672069   27980 api_server.go:278] https://127.0.0.1:55676/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0124 10:38:06.672086   27980 api_server.go:102] status: https://127.0.0.1:55676/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0124 10:38:07.166875   27980 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55676/healthz ...
	I0124 10:38:07.172860   27980 api_server.go:278] https://127.0.0.1:55676/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0124 10:38:07.172872   27980 api_server.go:102] status: https://127.0.0.1:55676/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0124 10:38:07.667319   27980 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55676/healthz ...
	I0124 10:38:07.672482   27980 api_server.go:278] https://127.0.0.1:55676/healthz returned 200:
	ok
	I0124 10:38:07.679326   27980 api_server.go:140] control plane version: v1.26.1
	I0124 10:38:07.679340   27980 api_server.go:130] duration metric: took 3.514277766s to wait for apiserver health ...
	I0124 10:38:07.679346   27980 cni.go:84] Creating CNI manager for ""
	I0124 10:38:07.679355   27980 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0124 10:38:07.700454   27980 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0124 10:38:07.721258   27980 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0124 10:38:07.730905   27980 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0124 10:38:07.744588   27980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0124 10:38:07.751983   27980 system_pods.go:59] 8 kube-system pods found
	I0124 10:38:07.751998   27980 system_pods.go:61] "coredns-787d4945fb-zx6lw" [59dffed1-7e55-46dd-b69d-3d8b73081ecf] Running
	I0124 10:38:07.752002   27980 system_pods.go:61] "etcd-embed-certs-777000" [5c40f0b8-9b3a-4819-8e52-fa714d12dbd6] Running
	I0124 10:38:07.752006   27980 system_pods.go:61] "kube-apiserver-embed-certs-777000" [0e9fe729-282e-4a20-a5d4-6cabc93c5c56] Running
	I0124 10:38:07.752010   27980 system_pods.go:61] "kube-controller-manager-embed-certs-777000" [83294568-e8ee-4bc9-9c4b-5e2c8a4e9e77] Running
	I0124 10:38:07.752013   27980 system_pods.go:61] "kube-proxy-77t2j" [bb51932a-f335-4ee3-a03b-d3fc2c45d779] Running
	I0124 10:38:07.752017   27980 system_pods.go:61] "kube-scheduler-embed-certs-777000" [e716b953-5b9f-4c49-a132-ed48b5faa8c4] Running
	I0124 10:38:07.752025   27980 system_pods.go:61] "metrics-server-7997d45854-s2t27" [9d5bd20c-b08e-4d05-802f-199788982f33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0124 10:38:07.752030   27980 system_pods.go:61] "storage-provisioner" [9f51e35d-341c-46b3-bef2-823b95df95a3] Running
	I0124 10:38:07.752034   27980 system_pods.go:74] duration metric: took 7.435546ms to wait for pod list to return data ...
	I0124 10:38:07.752041   27980 node_conditions.go:102] verifying NodePressure condition ...
	I0124 10:38:07.755692   27980 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0124 10:38:07.755706   27980 node_conditions.go:123] node cpu capacity is 6
	I0124 10:38:07.755714   27980 node_conditions.go:105] duration metric: took 3.669486ms to run NodePressure ...
	I0124 10:38:07.755726   27980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:38:07.895233   27980 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0124 10:38:07.899455   27980 retry.go:31] will retry after 276.165072ms: kubelet not initialised
	I0124 10:38:08.182785   27980 kubeadm.go:784] kubelet initialised
	I0124 10:38:08.182796   27980 kubeadm.go:785] duration metric: took 287.534924ms waiting for restarted kubelet to initialise ...
	I0124 10:38:08.182803   27980 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0124 10:38:08.187694   27980 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-zx6lw" in "kube-system" namespace to be "Ready" ...
	I0124 10:38:08.192771   27980 pod_ready.go:92] pod "coredns-787d4945fb-zx6lw" in "kube-system" namespace has status "Ready":"True"
	I0124 10:38:08.192779   27980 pod_ready.go:81] duration metric: took 5.073088ms waiting for pod "coredns-787d4945fb-zx6lw" in "kube-system" namespace to be "Ready" ...
	I0124 10:38:08.192786   27980 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-777000" in "kube-system" namespace to be "Ready" ...
	I0124 10:38:08.197410   27980 pod_ready.go:92] pod "etcd-embed-certs-777000" in "kube-system" namespace has status "Ready":"True"
	I0124 10:38:08.197418   27980 pod_ready.go:81] duration metric: took 4.627801ms waiting for pod "etcd-embed-certs-777000" in "kube-system" namespace to be "Ready" ...
	I0124 10:38:08.197424   27980 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-777000" in "kube-system" namespace to be "Ready" ...
	I0124 10:38:08.202652   27980 pod_ready.go:92] pod "kube-apiserver-embed-certs-777000" in "kube-system" namespace has status "Ready":"True"
	I0124 10:38:08.202661   27980 pod_ready.go:81] duration metric: took 5.232543ms waiting for pod "kube-apiserver-embed-certs-777000" in "kube-system" namespace to be "Ready" ...
	I0124 10:38:08.202668   27980 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-777000" in "kube-system" namespace to be "Ready" ...
	I0124 10:38:08.348584   27980 pod_ready.go:92] pod "kube-controller-manager-embed-certs-777000" in "kube-system" namespace has status "Ready":"True"
	I0124 10:38:08.348597   27980 pod_ready.go:81] duration metric: took 145.91802ms waiting for pod "kube-controller-manager-embed-certs-777000" in "kube-system" namespace to be "Ready" ...
	I0124 10:38:08.348607   27980 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-77t2j" in "kube-system" namespace to be "Ready" ...
	I0124 10:38:08.749443   27980 pod_ready.go:92] pod "kube-proxy-77t2j" in "kube-system" namespace has status "Ready":"True"
	I0124 10:38:08.749459   27980 pod_ready.go:81] duration metric: took 400.829529ms waiting for pod "kube-proxy-77t2j" in "kube-system" namespace to be "Ready" ...
	I0124 10:38:08.749475   27980 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-777000" in "kube-system" namespace to be "Ready" ...
	I0124 10:38:11.173012   27980 pod_ready.go:102] pod "kube-scheduler-embed-certs-777000" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:09.225927   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:09.331764   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:38:09.356116   27263 logs.go:279] 0 containers: []
	W0124 10:38:09.356130   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:38:09.356199   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:38:09.380987   27263 logs.go:279] 0 containers: []
	W0124 10:38:09.381003   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:38:09.381084   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:38:09.406172   27263 logs.go:279] 0 containers: []
	W0124 10:38:09.406186   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:38:09.406261   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:38:09.432250   27263 logs.go:279] 0 containers: []
	W0124 10:38:09.432269   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:38:09.432397   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:38:09.457997   27263 logs.go:279] 0 containers: []
	W0124 10:38:09.458010   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:38:09.458079   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:38:09.484721   27263 logs.go:279] 0 containers: []
	W0124 10:38:09.484734   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:38:09.484810   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:38:09.513208   27263 logs.go:279] 0 containers: []
	W0124 10:38:09.513222   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:38:09.513303   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:38:09.539436   27263 logs.go:279] 0 containers: []
	W0124 10:38:09.539449   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:38:09.539457   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:38:09.539464   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:38:09.582963   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:38:09.582989   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:38:09.598615   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:38:09.598636   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:38:09.659498   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:38:09.659511   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:38:09.659518   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:38:09.680325   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:38:09.680343   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:38:11.732480   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052056553s)
	I0124 10:38:13.173157   27980 pod_ready.go:102] pod "kube-scheduler-embed-certs-777000" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:15.173822   27980 pod_ready.go:102] pod "kube-scheduler-embed-certs-777000" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:14.234398   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:14.333390   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:38:14.359210   27263 logs.go:279] 0 containers: []
	W0124 10:38:14.359225   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:38:14.359293   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:38:14.384535   27263 logs.go:279] 0 containers: []
	W0124 10:38:14.384550   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:38:14.384636   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:38:14.410296   27263 logs.go:279] 0 containers: []
	W0124 10:38:14.410310   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:38:14.410390   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:38:14.464844   27263 logs.go:279] 0 containers: []
	W0124 10:38:14.464860   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:38:14.464952   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:38:14.491148   27263 logs.go:279] 0 containers: []
	W0124 10:38:14.491166   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:38:14.491250   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:38:14.517076   27263 logs.go:279] 0 containers: []
	W0124 10:38:14.517090   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:38:14.517160   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:38:14.541340   27263 logs.go:279] 0 containers: []
	W0124 10:38:14.541353   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:38:14.541420   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:38:14.565448   27263 logs.go:279] 0 containers: []
	W0124 10:38:14.565462   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:38:14.565468   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:38:14.565476   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:38:14.606381   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:38:14.606399   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:38:14.619569   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:38:14.619620   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:38:14.677455   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:38:14.677468   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:38:14.677475   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:38:14.693797   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:38:14.693812   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:38:16.742142   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048264861s)
	I0124 10:38:17.671753   27980 pod_ready.go:102] pod "kube-scheduler-embed-certs-777000" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:18.672085   27980 pod_ready.go:92] pod "kube-scheduler-embed-certs-777000" in "kube-system" namespace has status "Ready":"True"
	I0124 10:38:18.672100   27980 pod_ready.go:81] duration metric: took 9.922327933s waiting for pod "kube-scheduler-embed-certs-777000" in "kube-system" namespace to be "Ready" ...
	I0124 10:38:18.672107   27980 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace to be "Ready" ...
	I0124 10:38:20.684348   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:19.243211   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:19.332259   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:38:19.357246   27263 logs.go:279] 0 containers: []
	W0124 10:38:19.357259   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:38:19.357327   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:38:19.381046   27263 logs.go:279] 0 containers: []
	W0124 10:38:19.381060   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:38:19.381131   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:38:19.404015   27263 logs.go:279] 0 containers: []
	W0124 10:38:19.404029   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:38:19.404097   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:38:19.427316   27263 logs.go:279] 0 containers: []
	W0124 10:38:19.427331   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:38:19.427402   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:38:19.450249   27263 logs.go:279] 0 containers: []
	W0124 10:38:19.450263   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:38:19.450347   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:38:19.473571   27263 logs.go:279] 0 containers: []
	W0124 10:38:19.473584   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:38:19.473657   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:38:19.497296   27263 logs.go:279] 0 containers: []
	W0124 10:38:19.497310   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:38:19.497387   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:38:19.520416   27263 logs.go:279] 0 containers: []
	W0124 10:38:19.520430   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:38:19.520437   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:38:19.520458   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:38:19.561553   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:38:19.561568   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:38:19.575044   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:38:19.575061   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:38:19.633197   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:38:19.633211   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:38:19.633217   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:38:19.651602   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:38:19.651618   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:38:21.700964   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049287915s)
	I0124 10:38:23.184424   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:25.682816   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:24.201310   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:24.333111   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:38:24.369141   27263 logs.go:279] 0 containers: []
	W0124 10:38:24.369164   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:38:24.369245   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:38:24.395205   27263 logs.go:279] 0 containers: []
	W0124 10:38:24.395220   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:38:24.395303   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:38:24.418623   27263 logs.go:279] 0 containers: []
	W0124 10:38:24.418638   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:38:24.418707   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:38:24.442955   27263 logs.go:279] 0 containers: []
	W0124 10:38:24.442971   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:38:24.443050   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:38:24.468455   27263 logs.go:279] 0 containers: []
	W0124 10:38:24.468468   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:38:24.468542   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:38:24.493839   27263 logs.go:279] 0 containers: []
	W0124 10:38:24.493854   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:38:24.493931   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:38:24.517513   27263 logs.go:279] 0 containers: []
	W0124 10:38:24.517527   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:38:24.517648   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:38:24.542201   27263 logs.go:279] 0 containers: []
	W0124 10:38:24.542214   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:38:24.542221   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:38:24.542228   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:38:24.585211   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:38:24.585227   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:38:24.597475   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:38:24.597488   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:38:24.653771   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:38:24.653809   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:38:24.653817   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:38:24.670336   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:38:24.670354   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:38:26.719860   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049459327s)
	I0124 10:38:27.684450   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:30.182835   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:29.222256   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:29.333338   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:38:29.358758   27263 logs.go:279] 0 containers: []
	W0124 10:38:29.358773   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:38:29.358841   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:38:29.383138   27263 logs.go:279] 0 containers: []
	W0124 10:38:29.383151   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:38:29.383219   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:38:29.409311   27263 logs.go:279] 0 containers: []
	W0124 10:38:29.409329   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:38:29.409405   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:38:29.434963   27263 logs.go:279] 0 containers: []
	W0124 10:38:29.434977   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:38:29.435046   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:38:29.478172   27263 logs.go:279] 0 containers: []
	W0124 10:38:29.478185   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:38:29.478253   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:38:29.503290   27263 logs.go:279] 0 containers: []
	W0124 10:38:29.503303   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:38:29.503374   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:38:29.527505   27263 logs.go:279] 0 containers: []
	W0124 10:38:29.527555   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:38:29.527627   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:38:29.551837   27263 logs.go:279] 0 containers: []
	W0124 10:38:29.551851   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:38:29.551858   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:38:29.551866   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:38:29.590937   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:38:29.590952   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:38:29.603043   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:38:29.603056   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:38:29.659792   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0124 10:38:29.659803   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:38:29.659814   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:38:29.675399   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:38:29.675413   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:38:31.723686   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048231613s)
	I0124 10:38:34.224839   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:38:34.235807   27263 kubeadm.go:637] restartCluster took 4m12.05588921s
	W0124 10:38:34.235897   27263 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0124 10:38:34.235912   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0124 10:38:34.649714   27263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 10:38:34.659537   27263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0124 10:38:34.667363   27263 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0124 10:38:34.667417   27263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0124 10:38:34.675161   27263 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0124 10:38:34.675191   27263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0124 10:38:34.725606   27263 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0124 10:38:34.726125   27263 kubeadm.go:322] [preflight] Running pre-flight checks
	I0124 10:38:35.037119   27263 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0124 10:38:35.037235   27263 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0124 10:38:35.037355   27263 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0124 10:38:35.270689   27263 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0124 10:38:35.271623   27263 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0124 10:38:35.278471   27263 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0124 10:38:35.354320   27263 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0124 10:38:35.375966   27263 out.go:204]   - Generating certificates and keys ...
	I0124 10:38:35.376042   27263 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0124 10:38:35.376122   27263 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0124 10:38:35.376188   27263 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0124 10:38:35.376268   27263 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0124 10:38:35.376354   27263 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0124 10:38:35.376409   27263 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0124 10:38:35.376466   27263 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0124 10:38:35.376541   27263 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0124 10:38:35.376592   27263 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0124 10:38:35.376639   27263 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0124 10:38:35.376665   27263 kubeadm.go:322] [certs] Using the existing "sa" key
	I0124 10:38:35.376701   27263 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0124 10:38:35.482769   27263 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0124 10:38:35.599001   27263 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0124 10:38:35.877612   27263 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0124 10:38:35.972431   27263 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0124 10:38:35.973146   27263 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0124 10:38:32.184254   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:34.185046   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:36.185516   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:35.994635   27263 out.go:204]   - Booting up control plane ...
	I0124 10:38:35.994755   27263 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0124 10:38:35.994844   27263 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0124 10:38:35.994933   27263 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0124 10:38:35.995044   27263 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0124 10:38:35.995208   27263 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0124 10:38:38.684171   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:41.183083   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:43.183432   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:45.185442   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:47.684985   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:50.183474   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:52.185032   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:54.682552   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:56.685231   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:38:59.183519   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:01.183600   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:03.186900   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:05.683604   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:07.684422   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:10.185330   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:12.684130   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:14.684472   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:15.982797   27263 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0124 10:39:15.983145   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:39:15.983299   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:39:16.684800   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:19.185493   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:21.195191   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:20.984460   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:39:20.984655   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:39:23.685052   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:26.183501   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:28.185064   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:30.683464   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:30.986119   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:39:30.986278   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:39:32.684671   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:34.685025   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:37.183140   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:39.183841   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:41.184144   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:43.184924   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:45.683216   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:47.684934   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:50.183765   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:50.987331   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:39:50.987478   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:39:52.185928   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:54.684607   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:56.686677   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:39:58.686996   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:01.188104   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:03.683565   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:05.684880   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:08.184014   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:10.184158   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:12.683495   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:14.685553   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:16.686162   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:19.183419   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:21.184785   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:23.185689   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:25.685369   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:28.186537   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:30.684071   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:30.989601   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:40:30.989865   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:40:30.989880   27263 kubeadm.go:322] 
	I0124 10:40:30.989918   27263 kubeadm.go:322] Unfortunately, an error has occurred:
	I0124 10:40:30.989980   27263 kubeadm.go:322] 	timed out waiting for the condition
	I0124 10:40:30.990003   27263 kubeadm.go:322] 
	I0124 10:40:30.990073   27263 kubeadm.go:322] This error is likely caused by:
	I0124 10:40:30.990115   27263 kubeadm.go:322] 	- The kubelet is not running
	I0124 10:40:30.990231   27263 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0124 10:40:30.990239   27263 kubeadm.go:322] 
	I0124 10:40:30.990386   27263 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0124 10:40:30.990452   27263 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0124 10:40:30.990534   27263 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0124 10:40:30.990563   27263 kubeadm.go:322] 
	I0124 10:40:30.990673   27263 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0124 10:40:30.990796   27263 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0124 10:40:30.990862   27263 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0124 10:40:30.990912   27263 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0124 10:40:30.990985   27263 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0124 10:40:30.991014   27263 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0124 10:40:30.993758   27263 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0124 10:40:30.993824   27263 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0124 10:40:30.993962   27263 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
	I0124 10:40:30.994055   27263 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0124 10:40:30.994153   27263 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0124 10:40:30.994240   27263 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0124 10:40:30.994390   27263 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0124 10:40:30.994417   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0124 10:40:31.409898   27263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 10:40:31.419728   27263 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0124 10:40:31.419783   27263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0124 10:40:31.427528   27263 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0124 10:40:31.427548   27263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0124 10:40:31.476171   27263 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0124 10:40:31.476224   27263 kubeadm.go:322] [preflight] Running pre-flight checks
	I0124 10:40:31.783986   27263 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0124 10:40:31.784084   27263 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0124 10:40:31.784177   27263 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0124 10:40:32.013779   27263 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0124 10:40:32.014538   27263 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0124 10:40:32.021443   27263 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0124 10:40:32.097023   27263 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0124 10:40:32.118655   27263 out.go:204]   - Generating certificates and keys ...
	I0124 10:40:32.118789   27263 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0124 10:40:32.118868   27263 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0124 10:40:32.118939   27263 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0124 10:40:32.119036   27263 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0124 10:40:32.119124   27263 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0124 10:40:32.119203   27263 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0124 10:40:32.119272   27263 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0124 10:40:32.119313   27263 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0124 10:40:32.119366   27263 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0124 10:40:32.119412   27263 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0124 10:40:32.119441   27263 kubeadm.go:322] [certs] Using the existing "sa" key
	I0124 10:40:32.119499   27263 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0124 10:40:32.155667   27263 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0124 10:40:32.409107   27263 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0124 10:40:32.615328   27263 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0124 10:40:32.678101   27263 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0124 10:40:32.678779   27263 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0124 10:40:32.685879   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:34.686901   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:32.700218   27263 out.go:204]   - Booting up control plane ...
	I0124 10:40:32.700289   27263 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0124 10:40:32.700356   27263 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0124 10:40:32.700420   27263 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0124 10:40:32.700480   27263 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0124 10:40:32.700600   27263 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0124 10:40:37.185046   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:39.185226   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:41.185495   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:43.685324   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:46.185010   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:48.685175   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:51.185689   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:53.684499   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:56.185578   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:40:58.685229   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:01.184616   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:03.687755   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:06.185748   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:08.683635   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:10.684483   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:13.184277   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:15.185388   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:12.689848   27263 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0124 10:41:12.690483   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:41:12.690617   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:41:17.185461   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:19.686969   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:17.691224   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:41:17.691386   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:41:22.184259   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:24.184882   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:26.184935   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:28.684442   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:30.685140   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:27.692167   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:41:27.692348   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:41:33.186279   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:35.684167   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:37.684934   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:39.687476   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:42.186186   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:44.685909   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:47.186139   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:49.686741   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:47.693321   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:41:47.693485   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:41:52.184561   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:54.185861   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:56.684028   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:41:58.685682   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:42:00.686963   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:42:03.185947   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:42:05.684404   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:42:07.685678   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:42:09.686626   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:42:12.185571   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:42:14.684718   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:42:16.686158   27980 pod_ready.go:102] pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace has status "Ready":"False"
	I0124 10:42:18.680706   27980 pod_ready.go:81] duration metric: took 4m0.006782977s waiting for pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace to be "Ready" ...
	E0124 10:42:18.680733   27980 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7997d45854-s2t27" in "kube-system" namespace to be "Ready" (will not retry!)
	I0124 10:42:18.680750   27980 pod_ready.go:38] duration metric: took 4m10.495823526s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0124 10:42:18.680777   27980 kubeadm.go:637] restartCluster took 4m26.729125538s
	W0124 10:42:18.680911   27980 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0124 10:42:18.680940   27980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0124 10:42:23.018557   27980 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (4.337574437s)
	I0124 10:42:23.018634   27980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 10:42:23.028829   27980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0124 10:42:23.036796   27980 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0124 10:42:23.036880   27980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0124 10:42:23.044653   27980 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0124 10:42:23.044682   27980 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0124 10:42:23.093192   27980 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0124 10:42:23.093239   27980 kubeadm.go:322] [preflight] Running pre-flight checks
	I0124 10:42:23.202257   27980 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0124 10:42:23.202440   27980 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0124 10:42:23.202575   27980 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0124 10:42:23.333929   27980 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0124 10:42:23.355651   27980 out.go:204]   - Generating certificates and keys ...
	I0124 10:42:23.355725   27980 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0124 10:42:23.355783   27980 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0124 10:42:23.355848   27980 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0124 10:42:23.355894   27980 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0124 10:42:23.355961   27980 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0124 10:42:23.356012   27980 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0124 10:42:23.356088   27980 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0124 10:42:23.356173   27980 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0124 10:42:23.356255   27980 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0124 10:42:23.356308   27980 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0124 10:42:23.356334   27980 kubeadm.go:322] [certs] Using the existing "sa" key
	I0124 10:42:23.356388   27980 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0124 10:42:23.533998   27980 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0124 10:42:23.794492   27980 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0124 10:42:23.886122   27980 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0124 10:42:24.074240   27980 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0124 10:42:24.086243   27980 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0124 10:42:24.086896   27980 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0124 10:42:24.086966   27980 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0124 10:42:24.163462   27980 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0124 10:42:24.184860   27980 out.go:204]   - Booting up control plane ...
	I0124 10:42:24.184946   27980 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0124 10:42:24.185040   27980 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0124 10:42:24.185084   27980 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0124 10:42:24.185139   27980 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0124 10:42:24.185272   27980 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0124 10:42:27.694732   27263 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0124 10:42:27.694874   27263 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0124 10:42:27.694886   27263 kubeadm.go:322] 
	I0124 10:42:27.694915   27263 kubeadm.go:322] Unfortunately, an error has occurred:
	I0124 10:42:27.694950   27263 kubeadm.go:322] 	timed out waiting for the condition
	I0124 10:42:27.694956   27263 kubeadm.go:322] 
	I0124 10:42:27.694988   27263 kubeadm.go:322] This error is likely caused by:
	I0124 10:42:27.695013   27263 kubeadm.go:322] 	- The kubelet is not running
	I0124 10:42:27.695130   27263 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0124 10:42:27.695141   27263 kubeadm.go:322] 
	I0124 10:42:27.695219   27263 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0124 10:42:27.695259   27263 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0124 10:42:27.695293   27263 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0124 10:42:27.695300   27263 kubeadm.go:322] 
	I0124 10:42:27.695508   27263 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0124 10:42:27.695579   27263 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0124 10:42:27.695661   27263 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0124 10:42:27.695709   27263 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0124 10:42:27.695799   27263 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0124 10:42:27.695826   27263 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0124 10:42:27.698953   27263 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0124 10:42:27.699027   27263 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0124 10:42:27.699185   27263 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
	I0124 10:42:27.699283   27263 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0124 10:42:27.699352   27263 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0124 10:42:27.699417   27263 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0124 10:42:27.699437   27263 kubeadm.go:403] StartCluster complete in 8m5.550644532s
	I0124 10:42:27.699525   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0124 10:42:27.725130   27263 logs.go:279] 0 containers: []
	W0124 10:42:27.725144   27263 logs.go:281] No container was found matching "kube-apiserver"
	I0124 10:42:27.725281   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0124 10:42:27.749346   27263 logs.go:279] 0 containers: []
	W0124 10:42:27.749360   27263 logs.go:281] No container was found matching "etcd"
	I0124 10:42:27.749433   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0124 10:42:27.774887   27263 logs.go:279] 0 containers: []
	W0124 10:42:27.774903   27263 logs.go:281] No container was found matching "coredns"
	I0124 10:42:27.774977   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0124 10:42:27.818348   27263 logs.go:279] 0 containers: []
	W0124 10:42:27.818364   27263 logs.go:281] No container was found matching "kube-scheduler"
	I0124 10:42:27.818452   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0124 10:42:27.842586   27263 logs.go:279] 0 containers: []
	W0124 10:42:27.842601   27263 logs.go:281] No container was found matching "kube-proxy"
	I0124 10:42:27.842678   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0124 10:42:27.873567   27263 logs.go:279] 0 containers: []
	W0124 10:42:27.873585   27263 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0124 10:42:27.873662   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0124 10:42:27.904877   27263 logs.go:279] 0 containers: []
	W0124 10:42:27.904892   27263 logs.go:281] No container was found matching "storage-provisioner"
	I0124 10:42:27.904968   27263 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0124 10:42:27.931921   27263 logs.go:279] 0 containers: []
	W0124 10:42:27.931935   27263 logs.go:281] No container was found matching "kube-controller-manager"
	I0124 10:42:27.931942   27263 logs.go:124] Gathering logs for Docker ...
	I0124 10:42:27.931949   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0124 10:42:27.947814   27263 logs.go:124] Gathering logs for container status ...
	I0124 10:42:27.947828   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0124 10:42:30.001529   27263 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053676358s)
	I0124 10:42:30.001677   27263 logs.go:124] Gathering logs for kubelet ...
	I0124 10:42:30.001687   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0124 10:42:30.043303   27263 logs.go:124] Gathering logs for dmesg ...
	I0124 10:42:30.043320   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0124 10:42:30.055895   27263 logs.go:124] Gathering logs for describe nodes ...
	I0124 10:42:30.055911   27263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0124 10:42:30.114077   27263 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0124 10:42:30.114093   27263 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0124 10:42:30.114113   27263 out.go:239] * 
	W0124 10:42:30.114240   27263 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0124 10:42:30.114277   27263 out.go:239] * 
	W0124 10:42:30.114974   27263 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0124 10:42:30.196610   27263 out.go:177] 
	W0124 10:42:30.270828   27263 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0124 10:42:30.270952   27263 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0124 10:42:30.271005   27263 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0124 10:42:30.345504   27263 out.go:177] 
	I0124 10:42:29.671038   27980 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502233 seconds
	I0124 10:42:29.671175   27980 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0124 10:42:29.681009   27980 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0124 10:42:30.349504   27980 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0124 10:42:30.349694   27980 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-777000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0124 10:42:30.859287   27980 kubeadm.go:322] [bootstrap-token] Using token: b71asq.eb8szagjal324inq
	I0124 10:42:30.897766   27980 out.go:204]   - Configuring RBAC rules ...
	I0124 10:42:30.897863   27980 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0124 10:42:30.900712   27980 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0124 10:42:31.011898   27980 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0124 10:42:31.016904   27980 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0124 10:42:31.020968   27980 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0124 10:42:31.024502   27980 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0124 10:42:31.035681   27980 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0124 10:42:31.221377   27980 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0124 10:42:31.305204   27980 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0124 10:42:31.305789   27980 kubeadm.go:322] 
	I0124 10:42:31.305889   27980 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0124 10:42:31.305905   27980 kubeadm.go:322] 
	I0124 10:42:31.305994   27980 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0124 10:42:31.306003   27980 kubeadm.go:322] 
	I0124 10:42:31.306041   27980 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0124 10:42:31.306115   27980 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0124 10:42:31.306179   27980 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0124 10:42:31.306189   27980 kubeadm.go:322] 
	I0124 10:42:31.306255   27980 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0124 10:42:31.306273   27980 kubeadm.go:322] 
	I0124 10:42:31.306352   27980 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0124 10:42:31.306365   27980 kubeadm.go:322] 
	I0124 10:42:31.306427   27980 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0124 10:42:31.306508   27980 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0124 10:42:31.306610   27980 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0124 10:42:31.306618   27980 kubeadm.go:322] 
	I0124 10:42:31.306734   27980 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0124 10:42:31.306833   27980 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0124 10:42:31.306840   27980 kubeadm.go:322] 
	I0124 10:42:31.306916   27980 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token b71asq.eb8szagjal324inq \
	I0124 10:42:31.307009   27980 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1dd9bf742077183a524a4547f7dd01a1fa07394a69c0e8e5ff6c1023717dbaea \
	I0124 10:42:31.307030   27980 kubeadm.go:322] 	--control-plane 
	I0124 10:42:31.307036   27980 kubeadm.go:322] 
	I0124 10:42:31.307133   27980 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0124 10:42:31.307143   27980 kubeadm.go:322] 
	I0124 10:42:31.307222   27980 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token b71asq.eb8szagjal324inq \
	I0124 10:42:31.307308   27980 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1dd9bf742077183a524a4547f7dd01a1fa07394a69c0e8e5ff6c1023717dbaea 
	I0124 10:42:31.311596   27980 kubeadm.go:322] W0124 18:42:23.087473    9092 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0124 10:42:31.311785   27980 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0124 10:42:31.311903   27980 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0124 10:42:31.311921   27980 cni.go:84] Creating CNI manager for ""
	I0124 10:42:31.311941   27980 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0124 10:42:31.349013   27980 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2023-01-24 18:34:18 UTC, end at Tue 2023-01-24 18:42:32 UTC. --
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[438]: time="2023-01-24T18:34:21.087677716Z" level=info msg="Daemon has completed initialization"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[438]: time="2023-01-24T18:34:21.111857700Z" level=info msg="API listen on [::]:2376"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[438]: time="2023-01-24T18:34:21.114467184Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[438]: time="2023-01-24T18:34:21.115118985Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[438]: time="2023-01-24T18:34:21.115247831Z" level=info msg="Daemon shutdown complete"
	Jan 24 18:34:21 old-k8s-version-115000 systemd[1]: docker.service: Succeeded.
	Jan 24 18:34:21 old-k8s-version-115000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 24 18:34:21 old-k8s-version-115000 systemd[1]: Starting Docker Application Container Engine...
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.167334866Z" level=info msg="Starting up"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.168932416Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.168971093Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.169031380Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.169044583Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.170673568Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.170690657Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.170705796Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.170718011Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.176701201Z" level=info msg="Loading containers: start."
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.255090071Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.286732777Z" level=info msg="Loading containers: done."
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.295014375Z" level=info msg="Docker daemon" commit=42c8b31 graphdriver(s)=overlay2 version=20.10.22
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.295077787Z" level=info msg="Daemon has completed initialization"
	Jan 24 18:34:21 old-k8s-version-115000 systemd[1]: Started Docker Application Container Engine.
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.319378820Z" level=info msg="API listen on [::]:2376"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.322431114Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-01-24T18:42:34Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Jan24 18:23] hrtimer: interrupt took 2585846 ns
	
	* 
	* ==> kernel <==
	*  18:42:34 up  1:41,  0 users,  load average: 0.81, 0.85, 1.36
	Linux old-k8s-version-115000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2023-01-24 18:34:18 UTC, end at Tue 2023-01-24 18:42:34 UTC. --
	Jan 24 18:42:32 old-k8s-version-115000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 24 18:42:33 old-k8s-version-115000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Jan 24 18:42:33 old-k8s-version-115000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 24 18:42:33 old-k8s-version-115000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 24 18:42:33 old-k8s-version-115000 kubelet[14717]: I0124 18:42:33.446304   14717 server.go:410] Version: v1.16.0
	Jan 24 18:42:33 old-k8s-version-115000 kubelet[14717]: I0124 18:42:33.446495   14717 plugins.go:100] No cloud provider specified.
	Jan 24 18:42:33 old-k8s-version-115000 kubelet[14717]: I0124 18:42:33.446506   14717 server.go:773] Client rotation is on, will bootstrap in background
	Jan 24 18:42:33 old-k8s-version-115000 kubelet[14717]: I0124 18:42:33.448283   14717 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 24 18:42:33 old-k8s-version-115000 kubelet[14717]: W0124 18:42:33.449093   14717 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 24 18:42:33 old-k8s-version-115000 kubelet[14717]: W0124 18:42:33.449160   14717 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 24 18:42:33 old-k8s-version-115000 kubelet[14717]: F0124 18:42:33.449184   14717 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 24 18:42:33 old-k8s-version-115000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 24 18:42:33 old-k8s-version-115000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 24 18:42:34 old-k8s-version-115000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Jan 24 18:42:34 old-k8s-version-115000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 24 18:42:34 old-k8s-version-115000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 24 18:42:34 old-k8s-version-115000 kubelet[14733]: I0124 18:42:34.217163   14733 server.go:410] Version: v1.16.0
	Jan 24 18:42:34 old-k8s-version-115000 kubelet[14733]: I0124 18:42:34.217320   14733 plugins.go:100] No cloud provider specified.
	Jan 24 18:42:34 old-k8s-version-115000 kubelet[14733]: I0124 18:42:34.217329   14733 server.go:773] Client rotation is on, will bootstrap in background
	Jan 24 18:42:34 old-k8s-version-115000 kubelet[14733]: I0124 18:42:34.219079   14733 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 24 18:42:34 old-k8s-version-115000 kubelet[14733]: W0124 18:42:34.219843   14733 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 24 18:42:34 old-k8s-version-115000 kubelet[14733]: W0124 18:42:34.219912   14733 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 24 18:42:34 old-k8s-version-115000 kubelet[14733]: F0124 18:42:34.219938   14733 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 24 18:42:34 old-k8s-version-115000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 24 18:42:34 old-k8s-version-115000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0124 10:42:34.569696   28351 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
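The kubelet journal above ends in a crash loop on "failed to run Kubelet: mountpoint for cpu not found", i.e. the kubelet cannot locate a cpu cgroup mountpoint inside the node container. A minimal manual check is sketched below; it assumes the node container keeps the profile name old-k8s-version-115000, as shown in the docker inspect output further down:

	docker exec old-k8s-version-115000 mount | grep cgroup
	docker exec old-k8s-version-115000 ls /sys/fs/cgroup

If no cpu controller is mounted, the restart counter seen above (161, 162, ...) keeps climbing and the control plane never becomes reachable.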
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-115000 -n old-k8s-version-115000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-115000 -n old-k8s-version-115000: exit status 2 (418.388797ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-115000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (498.66s)
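The failing start above ends with minikube's own hint: try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start (see the linked issue https://github.com/kubernetes/minikube/issues/4172). A retry sketch for this profile, using only what is visible in the log (the test's full flag set is not shown here, and v1.16.0 is taken from the "[init] Using Kubernetes version" line):

	out/minikube-darwin-amd64 start -p old-k8s-version-115000 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd
	out/minikube-darwin-amd64 ssh -p old-k8s-version-115000 "sudo journalctl -xeu kubelet"

The second command is the kubeadm troubleshooting hint from the log, run inside the node via minikube ssh.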

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0124 10:42:36.600890    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
E0124 10:42:41.844978    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:42:45.214468    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:43:26.346183    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/enable-default-cni-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:43:39.819384    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:43:59.644141    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:44:37.436212    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:44:49.391961    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/enable-default-cni-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:45:14.463839    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:45:19.567383    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:45:43.358856    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/auto-129000/client.crt: no such file or directory
E0124 10:45:47.255193    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:45:56.780328    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:46:00.477057    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:46:32.020248    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:46:36.649294    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:46:37.513109    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:46:51.080852    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:47:36.602862    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
E0124 10:47:41.846621    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:47:45.216749    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:47:53.737483    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:47:59.700803    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:48:26.348161    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/enable-default-cni-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:48:39.819625    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:49:04.894729    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:49:08.273643    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:49:37.435986    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:50:19.569568    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:50:43.360383    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/auto-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:51:36.651356    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:51:51.082399    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-115000 -n old-k8s-version-115000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-115000 -n old-k8s-version-115000: exit status 2 (414.543468ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-115000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
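The wait loop above only ever received EOF from https://127.0.0.1:55501, so no dashboard pod was ever listed. Once the apiserver for this profile is actually up, the same list the test polls for can be reproduced by hand; this is a sketch and assumes minikube has written a kubectl context named after the profile:

	kubectl --context old-k8s-version-115000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

With the apiserver reported as "Stopped" just above, this command fails in the same way the test's poll did.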
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-115000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-115000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7",
	        "Created": "2023-01-24T18:28:39.694404231Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313083,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-24T18:34:18.167062744Z",
	            "FinishedAt": "2023-01-24T18:34:15.257971971Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/hosts",
	        "LogPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7-json.log",
	        "Name": "/old-k8s-version-115000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-115000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-115000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0-init/diff:/var/lib/docker/overlay2/9c481b6fc4eda78a970c6b00c96c9f145b5d588195952c506fee1f3a38f6b827/diff:/var/lib/docker/overlay2/47481f9bc7214dddcf62375b5df8c94540c57703b1d6d51ac990043359419126/diff:/var/lib/docker/overlay2/b598a508e6cab858c4e3e0eb9d52d91575db756297302c95c476bb85082f44e0/diff:/var/lib/docker/overlay2/eb2938493863a763d06f90aa9a8774a6cf5891162d070c28c5ea356826cb9c00/diff:/var/lib/docker/overlay2/32de44262b0214eac920d9f248eab5c56907a2e7f6aa69c29c4e46c4074a7cac/diff:/var/lib/docker/overlay2/1a295a0be202f67a8806d1a09e94b6509e529ccee06e851bf52c63e932a41598/diff:/var/lib/docker/overlay2/f31de4b92cc5bd6d7f65b450b1907917e4402f07bc4105fb25aa1f8246c4a4e5/diff:/var/lib/docker/overlay2/91ed229d1177ed82e2ad15ef48cab26e269f4aac446f236ccf7602f7812b4844/diff:/var/lib/docker/overlay2/d76c7ee0e38e12ccde73d3892d573e4aace99b016da7a10649752a4fe4fe8638/diff:/var/lib/docker/overlay2/5ded14
3e1cfe20dc794363143043800e9ddaa724457e76dfc3480bc88bdcf50b/diff:/var/lib/docker/overlay2/93a2cbd1b1d5abd2ffd6d5d72e40d5e932e3cdee4ef9b08c0ff0e340f0b82250/diff:/var/lib/docker/overlay2/bb3f5e5d2796cb20f7b19463a04326cc5e79a70f606d384a018bf2677fdc9c59/diff:/var/lib/docker/overlay2/9cbefe5a0d987b7c33fa83edf8c1c1e50920e178002ea80eff9cdf15510e9f7b/diff:/var/lib/docker/overlay2/33e82fcf8ab49ebe12677ce9de48d21aef3b5793b44b0bcabc73c3674c581010/diff:/var/lib/docker/overlay2/e11a40d223ac259cc9385f5df423a4114d92d60488a253e2c2690c0b7457a8e8/diff:/var/lib/docker/overlay2/61a6833055db19e06daf765b9f85b5e3a0b5c194642afb06034ca3aba1f55909/diff:/var/lib/docker/overlay2/ee3319bed930e23040288482eca784537f46b47463ff427584d9b2b5211797e9/diff:/var/lib/docker/overlay2/eee58dbcea38f98302a06040b8570e985d703290fd458ac7329bfa0e55bd8448/diff:/var/lib/docker/overlay2/b658fe28fcf5e1fc3d6b2e9b4b847d6121a8b983e9c7fb1985d5ab2346e98f8b/diff:/var/lib/docker/overlay2/31ea9d8f709524c49028c58c0f8a05346bb654e8f78be840f557b2bedf65a54a/diff:/var/lib/d
ocker/overlay2/3cf9440c13010ba35d99312d9b15c642407767bf1e0fc45d05a2549c249afec7/diff:/var/lib/docker/overlay2/800878867e03c9e3e3b31fd2704a00aa2ae0e3986914887bf3d409b9550ff520/diff:/var/lib/docker/overlay2/581fb56aa8110438414995ed4c4a14b9912c75de1736b4b7d23e0b8fe720ecd9/diff:/var/lib/docker/overlay2/07b86cd484da6c8fb2f4bee904f870665d71591e8dc7be5efa53e1794b81d00f/diff:/var/lib/docker/overlay2/1079786f8be32b332c09a1406a8c3309f812525af6501891ff134bf591345a8d/diff:/var/lib/docker/overlay2/aaaabdfc565926946334338a8c5990344cee39d099a4b9f5151467564c8b476e/diff:/var/lib/docker/overlay2/f39918c58fc40801faea98e8de1249f865a73650e08eab314c9d152e6a3b34b5/diff:/var/lib/docker/overlay2/974fee46992fba128bb6ec5407ff459f67134f9674349df844ad7efddb1ce197/diff:/var/lib/docker/overlay2/da1fb4192fa1a6e35f4cad8340c23595b9197c7b307dc3acbddafef7c3eabc82/diff:/var/lib/docker/overlay2/3b77e137962b80628f381627a68f75ff468d7ccae4458d96fa0579ac771218fe/diff:/var/lib/docker/overlay2/3faa717465cabff3baa6c98c646cb65ad47d063b215430893bb7f045676
3a794/diff:/var/lib/docker/overlay2/9597f4dd91d0689567f5b93707e47b90e9d9ba308488dff4c828008698d40802/diff:/var/lib/docker/overlay2/04df968177dde5eecc89152d1a182fb9b925a8092117778b87497db1e350ce61/diff:/var/lib/docker/overlay2/466c44ae57218cc92c3105e1e0070db81834724a69f8a4417391186e96073aae/diff:/var/lib/docker/overlay2/ec2a2a479f00a246da5470537471969c82985b2cfb98a4e35c9ff415d2a73141/diff:/var/lib/docker/overlay2/bce991b71b3cfd9fbda43bee1a05c1f7f5d1182ddbcf8f503851bd5874171f2b/diff:/var/lib/docker/overlay2/fe2929ef3680f5c3a0bd3eb8595ea97b8e43bba3f4c05803abf70a6c92a68114/diff:/var/lib/docker/overlay2/656654dcbc6d86ba911abe2cc555591ab82c37f6960cd0dad3551f4bf80122ee/diff:/var/lib/docker/overlay2/0e7faeb1891565534bd5d77c8619c32e7b51715e75dc04640b4ef0c3bc72a6b8/diff:/var/lib/docker/overlay2/9f0ce2e9bbc988036f4207b6b8237ba1f52855f6ee97e629aa058d0adcb5e7c4/diff:/var/lib/docker/overlay2/2bd42f9be2610b8910f7b39f5ce1792b9f608e775e6d3b39b12fef011686b5bd/diff:/var/lib/docker/overlay2/79703f59d1d8ea0485fb2360a75307a714a167
fc401d5ce68a58a9a201ea4152/diff:/var/lib/docker/overlay2/e589647f3d60f11426627ec161f2bf8878a577bc3388bb26e206fc5182f4f83c/diff:/var/lib/docker/overlay2/059bdd38117e61965c55837c7fd45d7e2825b162ea4e4cab6a656a436a7bbed6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-115000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-115000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-115000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-115000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-115000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aeae45eac0d9801aed631b6f91823fc2a72eaba680eac64041de99fb28e72c64",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55502"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55498"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55499"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55500"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55501"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/aeae45eac0d9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-115000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a86b483b8467",
	                        "old-k8s-version-115000"
	                    ],
	                    "NetworkID": "5ad39a0309903e0f7f41a6f2aca4e1831033f2fa0e547dc51ca28338a4ce6eed",
	                    "EndpointID": "8a3908d14d7d1b555adf982499222c517cb6fc8a004ccb9ffc793e4d2e71600d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-115000 -n old-k8s-version-115000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-115000 -n old-k8s-version-115000: exit status 2 (418.257303ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-115000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-115000 logs -n 25: (3.474887099s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-777000                                | embed-certs-777000           | jenkins | v1.28.0 | 24 Jan 23 10:43 PST | 24 Jan 23 10:43 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p embed-certs-777000                                | embed-certs-777000           | jenkins | v1.28.0 | 24 Jan 23 10:43 PST | 24 Jan 23 10:43 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-777000                                | embed-certs-777000           | jenkins | v1.28.0 | 24 Jan 23 10:43 PST | 24 Jan 23 10:43 PST |
	| delete  | -p embed-certs-777000                                | embed-certs-777000           | jenkins | v1.28.0 | 24 Jan 23 10:43 PST | 24 Jan 23 10:43 PST |
	| delete  | -p                                                   | disable-driver-mounts-724000 | jenkins | v1.28.0 | 24 Jan 23 10:43 PST | 24 Jan 23 10:43 PST |
	|         | disable-driver-mounts-724000                         |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:43 PST | 24 Jan 23 10:44 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                             | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:44 PST | 24 Jan 23 10:44 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p                                                   | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:44 PST | 24 Jan 23 10:44 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-436000     | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:44 PST | 24 Jan 23 10:44 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:44 PST | 24 Jan 23 10:49 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| ssh     | -p                                                   | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:50 PST | 24 Jan 23 10:50 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                           |                              |         |         |                     |                     |
	| pause   | -p                                                   | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:50 PST | 24 Jan 23 10:50 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p                                                   | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:50 PST | 24 Jan 23 10:50 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:50 PST | 24 Jan 23 10:50 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:50 PST | 24 Jan 23 10:50 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	| start   | -p newest-cni-783000 --memory=2200 --alsologtostderr | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:50 PST | 24 Jan 23 10:51 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-783000           | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p newest-cni-783000                                 | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-783000                | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p newest-cni-783000 --memory=2200 --alsologtostderr | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-783000 sudo                            | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	|         | crictl images -o json                                |                              |         |         |                     |                     |
	| pause   | -p newest-cni-783000                                 | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p newest-cni-783000                                 | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p newest-cni-783000                                 | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	| delete  | -p newest-cni-783000                                 | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/24 10:51:26
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0124 10:51:26.697907   29780 out.go:296] Setting OutFile to fd 1 ...
	I0124 10:51:26.698151   29780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 10:51:26.698157   29780 out.go:309] Setting ErrFile to fd 2...
	I0124 10:51:26.698161   29780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 10:51:26.698272   29780 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
	I0124 10:51:26.698762   29780 out.go:303] Setting JSON to false
	I0124 10:51:26.718352   29780 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6661,"bootTime":1674579625,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0124 10:51:26.718479   29780 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0124 10:51:26.740314   29780 out.go:177] * [newest-cni-783000] minikube v1.28.0 on Darwin 13.1
	I0124 10:51:26.782081   29780 notify.go:220] Checking for updates...
	I0124 10:51:26.804054   29780 out.go:177]   - MINIKUBE_LOCATION=15565
	I0124 10:51:26.825154   29780 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 10:51:26.846881   29780 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0124 10:51:26.868214   29780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 10:51:26.890331   29780 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	I0124 10:51:26.912130   29780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0124 10:51:26.934817   29780 config.go:180] Loaded profile config "newest-cni-783000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 10:51:26.935552   29780 driver.go:365] Setting default libvirt URI to qemu:///system
	I0124 10:51:26.998351   29780 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0124 10:51:26.998489   29780 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 10:51:27.140103   29780 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-24 18:51:27.04748855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:
/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 10:51:27.161161   29780 out.go:177] * Using the docker driver based on existing profile
	I0124 10:51:27.182793   29780 start.go:296] selected driver: docker
	I0124 10:51:27.182816   29780 start.go:840] validating driver "docker" against &{Name:newest-cni-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-783000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: S
ubnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:51:27.182943   29780 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0124 10:51:27.186775   29780 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 10:51:27.328889   29780 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-24 18:51:27.236235099 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 10:51:27.329053   29780 start_flags.go:936] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0124 10:51:27.329073   29780 cni.go:84] Creating CNI manager for ""
	I0124 10:51:27.329087   29780 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0124 10:51:27.329107   29780 start_flags.go:319] config:
	{Name:newest-cni-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-783000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:51:27.372144   29780 out.go:177] * Starting control plane node newest-cni-783000 in cluster newest-cni-783000
	I0124 10:51:27.394959   29780 cache.go:120] Beginning downloading kic base image for docker with docker
	I0124 10:51:27.416032   29780 out.go:177] * Pulling base image ...
	I0124 10:51:27.457927   29780 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 10:51:27.457975   29780 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0124 10:51:27.457989   29780 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0124 10:51:27.458006   29780 cache.go:57] Caching tarball of preloaded images
	I0124 10:51:27.458131   29780 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0124 10:51:27.458142   29780 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0124 10:51:27.458633   29780 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/config.json ...
	I0124 10:51:27.515531   29780 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0124 10:51:27.515562   29780 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0124 10:51:27.515581   29780 cache.go:193] Successfully downloaded all kic artifacts
	I0124 10:51:27.515624   29780 start.go:364] acquiring machines lock for newest-cni-783000: {Name:mk751161a7eef0d1ed5d3f7aa701e7073f3f2ef9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0124 10:51:27.515716   29780 start.go:368] acquired machines lock for "newest-cni-783000" in 73.938µs
	I0124 10:51:27.515737   29780 start.go:96] Skipping create...Using existing machine configuration
	I0124 10:51:27.515747   29780 fix.go:55] fixHost starting: 
	I0124 10:51:27.516015   29780 cli_runner.go:164] Run: docker container inspect newest-cni-783000 --format={{.State.Status}}
	I0124 10:51:27.573070   29780 fix.go:103] recreateIfNeeded on newest-cni-783000: state=Stopped err=<nil>
	W0124 10:51:27.573103   29780 fix.go:129] unexpected machine state, will restart: <nil>
	I0124 10:51:27.616838   29780 out.go:177] * Restarting existing docker container for "newest-cni-783000" ...
	I0124 10:51:27.637805   29780 cli_runner.go:164] Run: docker start newest-cni-783000
	I0124 10:51:27.982632   29780 cli_runner.go:164] Run: docker container inspect newest-cni-783000 --format={{.State.Status}}
	I0124 10:51:28.043170   29780 kic.go:426] container "newest-cni-783000" state is running.
	I0124 10:51:28.043750   29780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-783000
	I0124 10:51:28.111751   29780 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/config.json ...
	I0124 10:51:28.112240   29780 machine.go:88] provisioning docker machine ...
	I0124 10:51:28.112267   29780 ubuntu.go:169] provisioning hostname "newest-cni-783000"
	I0124 10:51:28.112352   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:28.187030   29780 main.go:141] libmachine: Using SSH client type: native
	I0124 10:51:28.187272   29780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56561 <nil> <nil>}
	I0124 10:51:28.187292   29780 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-783000 && echo "newest-cni-783000" | sudo tee /etc/hostname
	I0124 10:51:28.330814   29780 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-783000
	
	I0124 10:51:28.330912   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:28.392757   29780 main.go:141] libmachine: Using SSH client type: native
	I0124 10:51:28.392929   29780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56561 <nil> <nil>}
	I0124 10:51:28.392945   29780 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-783000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-783000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-783000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0124 10:51:28.526038   29780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0124 10:51:28.526062   29780 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3057/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3057/.minikube}
	I0124 10:51:28.526088   29780 ubuntu.go:177] setting up certificates
	I0124 10:51:28.526099   29780 provision.go:83] configureAuth start
	I0124 10:51:28.526187   29780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-783000
	I0124 10:51:28.585949   29780 provision.go:138] copyHostCerts
	I0124 10:51:28.586044   29780 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem, removing ...
	I0124 10:51:28.586053   29780 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem
	I0124 10:51:28.586153   29780 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem (1078 bytes)
	I0124 10:51:28.586358   29780 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem, removing ...
	I0124 10:51:28.586366   29780 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem
	I0124 10:51:28.586432   29780 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem (1123 bytes)
	I0124 10:51:28.586571   29780 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem, removing ...
	I0124 10:51:28.586577   29780 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem
	I0124 10:51:28.586638   29780 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem (1675 bytes)
	I0124 10:51:28.586758   29780 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem org=jenkins.newest-cni-783000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-783000]
	I0124 10:51:28.691288   29780 provision.go:172] copyRemoteCerts
	I0124 10:51:28.691348   29780 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0124 10:51:28.691396   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:28.749448   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:28.843669   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0124 10:51:28.861058   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0124 10:51:28.878159   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0124 10:51:28.896543   29780 provision.go:86] duration metric: configureAuth took 370.427915ms
	I0124 10:51:28.896562   29780 ubuntu.go:193] setting minikube options for container-runtime
	I0124 10:51:28.896738   29780 config.go:180] Loaded profile config "newest-cni-783000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 10:51:28.896813   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:28.959473   29780 main.go:141] libmachine: Using SSH client type: native
	I0124 10:51:28.959633   29780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56561 <nil> <nil>}
	I0124 10:51:28.959642   29780 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0124 10:51:29.095618   29780 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0124 10:51:29.095633   29780 ubuntu.go:71] root file system type: overlay
	I0124 10:51:29.095830   29780 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0124 10:51:29.095929   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:29.154282   29780 main.go:141] libmachine: Using SSH client type: native
	I0124 10:51:29.154452   29780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56561 <nil> <nil>}
	I0124 10:51:29.154501   29780 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0124 10:51:29.298648   29780 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0124 10:51:29.298739   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:29.356991   29780 main.go:141] libmachine: Using SSH client type: native
	I0124 10:51:29.357143   29780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56561 <nil> <nil>}
	I0124 10:51:29.357156   29780 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0124 10:51:29.496799   29780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0124 10:51:29.496815   29780 machine.go:91] provisioned docker machine in 1.384558859s
	I0124 10:51:29.496824   29780 start.go:300] post-start starting for "newest-cni-783000" (driver="docker")
	I0124 10:51:29.496829   29780 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0124 10:51:29.496908   29780 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0124 10:51:29.496963   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:29.555421   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:29.648243   29780 ssh_runner.go:195] Run: cat /etc/os-release
	I0124 10:51:29.652421   29780 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0124 10:51:29.652443   29780 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0124 10:51:29.652452   29780 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0124 10:51:29.652456   29780 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0124 10:51:29.652465   29780 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/addons for local assets ...
	I0124 10:51:29.652560   29780 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/files for local assets ...
	I0124 10:51:29.652736   29780 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem -> 43552.pem in /etc/ssl/certs
	I0124 10:51:29.652921   29780 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0124 10:51:29.660879   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /etc/ssl/certs/43552.pem (1708 bytes)
	I0124 10:51:29.679389   29780 start.go:303] post-start completed in 182.549959ms
	I0124 10:51:29.679481   29780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0124 10:51:29.679541   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:29.743292   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:29.836917   29780 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0124 10:51:29.841708   29780 fix.go:57] fixHost completed within 2.325946705s
	I0124 10:51:29.841718   29780 start.go:83] releasing machines lock for "newest-cni-783000", held for 2.325979284s
	I0124 10:51:29.841807   29780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-783000
	I0124 10:51:29.899817   29780 ssh_runner.go:195] Run: cat /version.json
	I0124 10:51:29.899857   29780 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0124 10:51:29.899882   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:29.899932   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:29.963136   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:29.963295   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:30.055801   29780 ssh_runner.go:195] Run: systemctl --version
	I0124 10:51:30.116715   29780 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0124 10:51:30.121951   29780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0124 10:51:30.137492   29780 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0124 10:51:30.137614   29780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0124 10:51:30.145228   29780 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0124 10:51:30.158316   29780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0124 10:51:30.166040   29780 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0124 10:51:30.166053   29780 start.go:472] detecting cgroup driver to use...
	I0124 10:51:30.166069   29780 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 10:51:30.166250   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 10:51:30.180030   29780 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0124 10:51:30.189031   29780 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0124 10:51:30.197489   29780 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0124 10:51:30.197544   29780 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0124 10:51:30.206238   29780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 10:51:30.215029   29780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0124 10:51:30.223123   29780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 10:51:30.231593   29780 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0124 10:51:30.239353   29780 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0124 10:51:30.247635   29780 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0124 10:51:30.255062   29780 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0124 10:51:30.262258   29780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:51:30.328323   29780 ssh_runner.go:195] Run: sudo systemctl restart containerd
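	The containerd reconfiguration above boils down to switching the runc shim to the cgroupfs driver and restarting the service; a condensed sketch, assuming a stock /etc/containerd/config.toml rather than the exact script minikube generates:

	    #!/usr/bin/env bash
	    # Condensed sketch of the cgroup-driver switch logged above.
	    set -euo pipefail
	    CFG=/etc/containerd/config.toml
	    # Force the runc shim to cgroupfs and the v2 runtime, as in the log.
	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
	    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
	    # Reload unit files and restart containerd so the new driver takes effect.
	    sudo systemctl daemon-reload
	    sudo systemctl restart containerd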
	I0124 10:51:30.411139   29780 start.go:472] detecting cgroup driver to use...
	I0124 10:51:30.411162   29780 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 10:51:30.411252   29780 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0124 10:51:30.425822   29780 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0124 10:51:30.425901   29780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0124 10:51:30.437605   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 10:51:30.454407   29780 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0124 10:51:30.567240   29780 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0124 10:51:30.631470   29780 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0124 10:51:30.631489   29780 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0124 10:51:30.671558   29780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:51:30.767330   29780 ssh_runner.go:195] Run: sudo systemctl restart docker
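	The 144-byte /etc/docker/daemon.json pushed above is not echoed into the log; a plausible minimal file for the "cgroupfs" driver choice would be along these lines (contents are an assumption, not the file minikube actually wrote):

	    # Hypothetical daemon.json for the cgroupfs driver; restart docker afterwards.
	    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
	    {
	      "exec-opts": ["native.cgroupdriver=cgroupfs"],
	      "log-driver": "json-file",
	      "storage-driver": "overlay2"
	    }
	    EOF
	    sudo systemctl daemon-reload
	    sudo systemctl restart docker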
	I0124 10:51:31.011915   29780 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0124 10:51:31.081012   29780 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0124 10:51:31.140629   29780 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0124 10:51:31.222243   29780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:51:31.295818   29780 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0124 10:51:31.308780   29780 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0124 10:51:31.308957   29780 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0124 10:51:31.313790   29780 start.go:540] Will wait 60s for crictl version
	I0124 10:51:31.313834   29780 ssh_runner.go:195] Run: which crictl
	I0124 10:51:31.317787   29780 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0124 10:51:31.436356   29780 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.22
	RuntimeApiVersion:  v1alpha2
	I0124 10:51:31.436436   29780 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 10:51:31.469950   29780 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 10:51:31.540972   29780 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.22 ...
	I0124 10:51:31.541121   29780 cli_runner.go:164] Run: docker exec -t newest-cni-783000 dig +short host.docker.internal
	I0124 10:51:31.656020   29780 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0124 10:51:31.656132   29780 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0124 10:51:31.660652   29780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 10:51:31.670871   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:31.753123   29780 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0124 10:51:31.774243   29780 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 10:51:31.774403   29780 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 10:51:31.801272   29780 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.4
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/etcd:v3.3.8-0-gke.1
	registry.k8s.io/pause:test2
	
	-- /stdout --
	I0124 10:51:31.801290   29780 docker.go:560] Images already preloaded, skipping extraction
	I0124 10:51:31.801417   29780 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 10:51:31.827845   29780 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.4
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/etcd:v3.3.8-0-gke.1
	registry.k8s.io/pause:test2
	
	-- /stdout --
	I0124 10:51:31.827867   29780 cache_images.go:84] Images are preloaded, skipping loading
	I0124 10:51:31.827961   29780 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0124 10:51:31.899754   29780 cni.go:84] Creating CNI manager for ""
	I0124 10:51:31.899772   29780 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0124 10:51:31.899794   29780 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0124 10:51:31.899811   29780 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-783000 NodeName:newest-cni-783000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0124 10:51:31.900072   29780 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-783000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
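	A generated config like the one above can be sanity-checked before use with a kubeadm dry run; a sketch, using the path this log writes to later (the dry run itself is not part of the test):

	    # Validate the rendered config without touching the node.
	    sudo kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml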
	I0124 10:51:31.900236   29780 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-783000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:newest-cni-783000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0124 10:51:31.900305   29780 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0124 10:51:31.908733   29780 binaries.go:44] Found k8s binaries, skipping transfer
	I0124 10:51:31.908797   29780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0124 10:51:31.917000   29780 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0124 10:51:31.931285   29780 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0124 10:51:31.944956   29780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I0124 10:51:31.958964   29780 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0124 10:51:31.962868   29780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 10:51:31.973084   29780 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000 for IP: 192.168.67.2
	I0124 10:51:31.973107   29780 certs.go:186] acquiring lock for shared ca certs: {Name:mk86da88dcf76a0f52f98f7208bc1d04a2e55c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:51:31.973287   29780 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key
	I0124 10:51:31.973357   29780 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key
	I0124 10:51:31.973453   29780 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/client.key
	I0124 10:51:31.973527   29780 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/apiserver.key.c7fa3a9e
	I0124 10:51:31.973579   29780 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/proxy-client.key
	I0124 10:51:31.973806   29780 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem (1338 bytes)
	W0124 10:51:31.973865   29780 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355_empty.pem, impossibly tiny 0 bytes
	I0124 10:51:31.973876   29780 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem (1679 bytes)
	I0124 10:51:31.973916   29780 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem (1078 bytes)
	I0124 10:51:31.973951   29780 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem (1123 bytes)
	I0124 10:51:31.973979   29780 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem (1675 bytes)
	I0124 10:51:31.974051   29780 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem (1708 bytes)
	I0124 10:51:31.974718   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0124 10:51:31.993524   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0124 10:51:32.011173   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0124 10:51:32.029217   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0124 10:51:32.046741   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0124 10:51:32.064315   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0124 10:51:32.082316   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0124 10:51:32.100236   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0124 10:51:32.118674   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0124 10:51:32.136465   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem --> /usr/share/ca-certificates/4355.pem (1338 bytes)
	I0124 10:51:32.153829   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /usr/share/ca-certificates/43552.pem (1708 bytes)
	I0124 10:51:32.171649   29780 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0124 10:51:32.185150   29780 ssh_runner.go:195] Run: openssl version
	I0124 10:51:32.191148   29780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43552.pem && ln -fs /usr/share/ca-certificates/43552.pem /etc/ssl/certs/43552.pem"
	I0124 10:51:32.199624   29780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43552.pem
	I0124 10:51:32.203690   29780 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:34 /usr/share/ca-certificates/43552.pem
	I0124 10:51:32.203747   29780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43552.pem
	I0124 10:51:32.209427   29780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43552.pem /etc/ssl/certs/3ec20f2e.0"
	I0124 10:51:32.217171   29780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0124 10:51:32.225866   29780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:51:32.229861   29780 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:51:32.229938   29780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:51:32.235528   29780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0124 10:51:32.243043   29780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4355.pem && ln -fs /usr/share/ca-certificates/4355.pem /etc/ssl/certs/4355.pem"
	I0124 10:51:32.251161   29780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4355.pem
	I0124 10:51:32.255387   29780 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:34 /usr/share/ca-certificates/4355.pem
	I0124 10:51:32.255431   29780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4355.pem
	I0124 10:51:32.261131   29780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4355.pem /etc/ssl/certs/51391683.0"
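	The openssl/ln pairs above follow the standard OpenSSL subject-hash layout under /etc/ssl/certs; condensed for one certificate, with file names taken from the log:

	    # Link a CA by its subject hash so TLS clients scanning /etc/ssl/certs find it.
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"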
	I0124 10:51:32.268762   29780 kubeadm.go:401] StartCluster: {Name:newest-cni-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-783000 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:51:32.268875   29780 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0124 10:51:32.292515   29780 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0124 10:51:32.300532   29780 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0124 10:51:32.300555   29780 kubeadm.go:633] restartCluster start
	I0124 10:51:32.300610   29780 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0124 10:51:32.307700   29780 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:32.307769   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:32.366039   29780 kubeconfig.go:135] verify returned: extract IP: "newest-cni-783000" does not appear in /Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 10:51:32.366223   29780 kubeconfig.go:146] "newest-cni-783000" context is missing from /Users/jenkins/minikube-integration/15565-3057/kubeconfig - will repair!
	I0124 10:51:32.366549   29780 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/kubeconfig: {Name:mk581b13c705409309a542f9aac4783c330d27c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:51:32.367909   29780 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0124 10:51:32.375964   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:32.376018   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:32.385246   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:32.885361   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:32.885534   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:32.895444   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:33.385744   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:33.385858   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:33.395304   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:33.886807   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:33.887050   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:33.898305   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:34.386773   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:34.386926   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:34.398116   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:34.885706   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:34.885792   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:34.895714   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:35.385949   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:35.386031   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:35.395870   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:35.887252   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:35.887475   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:35.900067   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:36.386466   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:36.386573   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:36.396015   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:36.887434   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:36.887597   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:36.898635   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:37.387439   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:37.387683   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:37.398812   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:37.886161   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:37.886243   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:37.896301   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:38.385563   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:38.385678   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:38.396128   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:38.886407   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:38.886663   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:38.898085   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:39.386029   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:39.386094   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:39.395335   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:39.885541   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:39.885647   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:39.896643   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:40.387422   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:40.387661   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:40.398458   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:40.886091   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:40.886166   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:40.895578   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:41.387457   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:41.387706   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:41.398647   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:41.886664   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:41.886779   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:41.897184   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:42.386072   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:42.386145   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:42.395622   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:42.395633   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:42.395696   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:42.404566   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:42.404580   29780 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0124 10:51:42.404584   29780 kubeadm.go:1120] stopping kube-system containers ...
	I0124 10:51:42.404651   29780 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0124 10:51:42.430333   29780 docker.go:456] Stopping containers: [e3dcb510c87f 3389fb5695a0 73bf5f6fbba6 f28fd1706866 73fb4dfd4faf 9669b519e466 d3afc2c015ed a9c26c7fc75a da83f474a090 dbb107636a4d 42da5a789dd4 bb6e1851b1ae fea8f4c0ca57 87a56b7df548]
	I0124 10:51:42.430440   29780 ssh_runner.go:195] Run: docker stop e3dcb510c87f 3389fb5695a0 73bf5f6fbba6 f28fd1706866 73fb4dfd4faf 9669b519e466 d3afc2c015ed a9c26c7fc75a da83f474a090 dbb107636a4d 42da5a789dd4 bb6e1851b1ae fea8f4c0ca57 87a56b7df548
	I0124 10:51:42.458631   29780 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0124 10:51:42.469725   29780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0124 10:51:42.477762   29780 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan 24 18:50 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 24 18:50 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan 24 18:50 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan 24 18:50 /etc/kubernetes/scheduler.conf
	
	I0124 10:51:42.477821   29780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0124 10:51:42.485419   29780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0124 10:51:42.493136   29780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0124 10:51:42.500240   29780 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:42.500293   29780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0124 10:51:42.507511   29780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0124 10:51:42.514877   29780 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:42.514927   29780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0124 10:51:42.522025   29780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0124 10:51:42.529456   29780 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0124 10:51:42.529471   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:51:42.590871   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:51:43.348119   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:51:43.488064   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:51:43.569294   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:51:43.678639   29780 api_server.go:51] waiting for apiserver process to appear ...
	I0124 10:51:43.678725   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:51:44.192447   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:51:44.691091   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:51:45.191333   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:51:45.204249   29780 api_server.go:71] duration metric: took 1.52560203s to wait for apiserver process to appear ...
	I0124 10:51:45.204272   29780 api_server.go:87] waiting for apiserver healthz status ...
	I0124 10:51:45.204286   29780 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56560/healthz ...
	I0124 10:51:47.561061   29780 api_server.go:278] https://127.0.0.1:56560/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0124 10:51:47.561081   29780 api_server.go:102] status: https://127.0.0.1:56560/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0124 10:51:48.062473   29780 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56560/healthz ...
	I0124 10:51:48.068933   29780 api_server.go:278] https://127.0.0.1:56560/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0124 10:51:48.068948   29780 api_server.go:102] status: https://127.0.0.1:56560/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0124 10:51:48.561932   29780 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56560/healthz ...
	I0124 10:51:48.567708   29780 api_server.go:278] https://127.0.0.1:56560/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0124 10:51:48.567724   29780 api_server.go:102] status: https://127.0.0.1:56560/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0124 10:51:49.063214   29780 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56560/healthz ...
	I0124 10:51:49.070781   29780 api_server.go:278] https://127.0.0.1:56560/healthz returned 200:
	ok
	I0124 10:51:49.079136   29780 api_server.go:140] control plane version: v1.26.1
	I0124 10:51:49.079151   29780 api_server.go:130] duration metric: took 3.874849884s to wait for apiserver health ...
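	The probe sequence above (403 while anonymous access is still blocked, 500 while the rbac/bootstrap-roles hook settles, then 200) can be reproduced by hand; a rough curl equivalent, with the forwarded port taken from the log:

	    # Poll the apiserver health endpoint until it reports "ok".
	    # -k skips TLS verification, matching the unauthenticated probe in the log.
	    until curl -ksS --max-time 2 https://127.0.0.1:56560/healthz | grep -qx ok; do
	      sleep 0.5
	    done
	    echo "apiserver healthy"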
	I0124 10:51:49.079157   29780 cni.go:84] Creating CNI manager for ""
	I0124 10:51:49.079168   29780 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0124 10:51:49.103012   29780 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0124 10:51:49.124198   29780 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0124 10:51:49.168714   29780 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
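	The 457-byte /etc/cni/net.d/1-k8s.conflist written above is not shown in the log; a minimal bridge conflist of the same general shape, with the pod CIDR from this run, would look roughly like this (illustrative contents, not the exact file):

	    # Hypothetical bridge CNI config; the real file may differ in options.
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "1.0.0",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF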
	I0124 10:51:49.196141   29780 system_pods.go:43] waiting for kube-system pods to appear ...
	I0124 10:51:49.209012   29780 system_pods.go:59] 8 kube-system pods found
	I0124 10:51:49.209037   29780 system_pods.go:61] "coredns-787d4945fb-fjzwt" [22caaaf6-3474-4a1b-bcaf-c9853214930e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0124 10:51:49.209045   29780 system_pods.go:61] "etcd-newest-cni-783000" [eb99adef-1a7e-414a-b3d0-ddce87974396] Running
	I0124 10:51:49.209050   29780 system_pods.go:61] "kube-apiserver-newest-cni-783000" [7263f699-7f8a-44f5-8c28-f82ad6ab8379] Running
	I0124 10:51:49.209054   29780 system_pods.go:61] "kube-controller-manager-newest-cni-783000" [5e8ac97c-b5ee-4b7a-965c-d871f8fef1c6] Running
	I0124 10:51:49.209060   29780 system_pods.go:61] "kube-proxy-xgrfr" [df20d374-b9cf-412e-8310-68152cb2cfcf] Running
	I0124 10:51:49.209065   29780 system_pods.go:61] "kube-scheduler-newest-cni-783000" [24943f33-29f5-4197-8ceb-3d0602bc085b] Running
	I0124 10:51:49.209070   29780 system_pods.go:61] "metrics-server-7997d45854-pl5nw" [fd8dad35-f008-4d0f-917d-c71707e6c650] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0124 10:51:49.209092   29780 system_pods.go:61] "storage-provisioner" [42c87f27-355c-4845-8437-c80e6de9d89a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0124 10:51:49.209105   29780 system_pods.go:74] duration metric: took 12.9487ms to wait for pod list to return data ...
	I0124 10:51:49.209129   29780 node_conditions.go:102] verifying NodePressure condition ...
	I0124 10:51:49.213900   29780 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0124 10:51:49.213960   29780 node_conditions.go:123] node cpu capacity is 6
	I0124 10:51:49.213990   29780 node_conditions.go:105] duration metric: took 4.838246ms to run NodePressure ...
	I0124 10:51:49.214034   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:51:49.708211   29780 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0124 10:51:49.773656   29780 ops.go:34] apiserver oom_adj: -16
	I0124 10:51:49.773672   29780 kubeadm.go:637] restartCluster took 17.472998226s
	I0124 10:51:49.773683   29780 kubeadm.go:403] StartCluster complete in 17.504813994s
	I0124 10:51:49.773700   29780 settings.go:142] acquiring lock: {Name:mkeea169922107d4bc5deea23d2d200e61271e9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:51:49.773787   29780 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 10:51:49.774510   29780 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/kubeconfig: {Name:mk581b13c705409309a542f9aac4783c330d27c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:51:49.774828   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0124 10:51:49.774824   29780 addons.go:486] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0124 10:51:49.774884   29780 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-783000"
	I0124 10:51:49.774887   29780 addons.go:65] Setting metrics-server=true in profile "newest-cni-783000"
	I0124 10:51:49.774896   29780 addons.go:65] Setting default-storageclass=true in profile "newest-cni-783000"
	I0124 10:51:49.774910   29780 addons.go:227] Setting addon metrics-server=true in "newest-cni-783000"
	I0124 10:51:49.774910   29780 addons.go:227] Setting addon storage-provisioner=true in "newest-cni-783000"
	W0124 10:51:49.774919   29780 addons.go:236] addon metrics-server should already be in state true
	W0124 10:51:49.774922   29780 addons.go:236] addon storage-provisioner should already be in state true
	I0124 10:51:49.774966   29780 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-783000"
	I0124 10:51:49.774973   29780 host.go:66] Checking if "newest-cni-783000" exists ...
	I0124 10:51:49.774976   29780 host.go:66] Checking if "newest-cni-783000" exists ...
	I0124 10:51:49.774965   29780 addons.go:65] Setting dashboard=true in profile "newest-cni-783000"
	I0124 10:51:49.775008   29780 config.go:180] Loaded profile config "newest-cni-783000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 10:51:49.775008   29780 addons.go:227] Setting addon dashboard=true in "newest-cni-783000"
	W0124 10:51:49.775024   29780 addons.go:236] addon dashboard should already be in state true
	I0124 10:51:49.775094   29780 host.go:66] Checking if "newest-cni-783000" exists ...
	I0124 10:51:49.775381   29780 cli_runner.go:164] Run: docker container inspect newest-cni-783000 --format={{.State.Status}}
	I0124 10:51:49.775505   29780 cli_runner.go:164] Run: docker container inspect newest-cni-783000 --format={{.State.Status}}
	I0124 10:51:49.775560   29780 cli_runner.go:164] Run: docker container inspect newest-cni-783000 --format={{.State.Status}}
	I0124 10:51:49.775595   29780 cli_runner.go:164] Run: docker container inspect newest-cni-783000 --format={{.State.Status}}
	I0124 10:51:49.783820   29780 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-783000" context rescaled to 1 replicas
	I0124 10:51:49.783867   29780 start.go:221] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0124 10:51:49.806832   29780 out.go:177] * Verifying Kubernetes components...
	I0124 10:51:49.880671   29780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 10:51:49.916781   29780 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0124 10:51:49.912081   29780 addons.go:227] Setting addon default-storageclass=true in "newest-cni-783000"
	I0124 10:51:49.937772   29780 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0124 10:51:49.958433   29780 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W0124 10:51:49.958477   29780 addons.go:236] addon default-storageclass should already be in state true
	I0124 10:51:49.971726   29780 start.go:881] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0124 10:51:49.971758   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:49.979627   29780 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0124 10:51:49.979638   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0124 10:51:49.979682   29780 host.go:66] Checking if "newest-cni-783000" exists ...
	I0124 10:51:50.021470   29780 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0124 10:51:50.021512   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0124 10:51:50.058864   29780 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0124 10:51:50.021604   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:50.022234   29780 cli_runner.go:164] Run: docker container inspect newest-cni-783000 --format={{.State.Status}}
	I0124 10:51:50.059020   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:50.096897   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0124 10:51:50.096921   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0124 10:51:50.098057   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:50.113108   29780 api_server.go:51] waiting for apiserver process to appear ...
	I0124 10:51:50.113204   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:51:50.129928   29780 api_server.go:71] duration metric: took 346.024787ms to wait for apiserver process to appear ...
	I0124 10:51:50.129955   29780 api_server.go:87] waiting for apiserver healthz status ...
	I0124 10:51:50.129970   29780 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56560/healthz ...
	I0124 10:51:50.137730   29780 api_server.go:278] https://127.0.0.1:56560/healthz returned 200:
	ok
	I0124 10:51:50.140022   29780 api_server.go:140] control plane version: v1.26.1
	I0124 10:51:50.140042   29780 api_server.go:130] duration metric: took 10.0807ms to wait for apiserver health ...
	I0124 10:51:50.140058   29780 system_pods.go:43] waiting for kube-system pods to appear ...
	I0124 10:51:50.150126   29780 system_pods.go:59] 8 kube-system pods found
	I0124 10:51:50.150150   29780 system_pods.go:61] "coredns-787d4945fb-fjzwt" [22caaaf6-3474-4a1b-bcaf-c9853214930e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0124 10:51:50.150158   29780 system_pods.go:61] "etcd-newest-cni-783000" [eb99adef-1a7e-414a-b3d0-ddce87974396] Running
	I0124 10:51:50.150169   29780 system_pods.go:61] "kube-apiserver-newest-cni-783000" [7263f699-7f8a-44f5-8c28-f82ad6ab8379] Running
	I0124 10:51:50.150183   29780 system_pods.go:61] "kube-controller-manager-newest-cni-783000" [5e8ac97c-b5ee-4b7a-965c-d871f8fef1c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0124 10:51:50.150200   29780 system_pods.go:61] "kube-proxy-xgrfr" [df20d374-b9cf-412e-8310-68152cb2cfcf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0124 10:51:50.150226   29780 system_pods.go:61] "kube-scheduler-newest-cni-783000" [24943f33-29f5-4197-8ceb-3d0602bc085b] Running
	I0124 10:51:50.150240   29780 system_pods.go:61] "metrics-server-7997d45854-pl5nw" [fd8dad35-f008-4d0f-917d-c71707e6c650] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0124 10:51:50.150250   29780 system_pods.go:61] "storage-provisioner" [42c87f27-355c-4845-8437-c80e6de9d89a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0124 10:51:50.150256   29780 system_pods.go:74] duration metric: took 10.1907ms to wait for pod list to return data ...
	I0124 10:51:50.150265   29780 default_sa.go:34] waiting for default service account to be created ...
	I0124 10:51:50.154454   29780 default_sa.go:45] found service account: "default"
	I0124 10:51:50.154470   29780 default_sa.go:55] duration metric: took 4.184192ms for default service account to be created ...
	I0124 10:51:50.154481   29780 kubeadm.go:578] duration metric: took 370.588378ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0124 10:51:50.154498   29780 node_conditions.go:102] verifying NodePressure condition ...
	I0124 10:51:50.158893   29780 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0124 10:51:50.158908   29780 node_conditions.go:123] node cpu capacity is 6
	I0124 10:51:50.158916   29780 node_conditions.go:105] duration metric: took 4.414225ms to run NodePressure ...
	I0124 10:51:50.158924   29780 start.go:226] waiting for startup goroutines ...
	I0124 10:51:50.187030   29780 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0124 10:51:50.187044   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0124 10:51:50.187178   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:50.190054   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:50.190216   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:50.190275   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:50.255341   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:50.324649   29780 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0124 10:51:50.324662   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0124 10:51:50.325595   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0124 10:51:50.325604   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0124 10:51:50.327588   29780 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0124 10:51:50.368473   29780 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0124 10:51:50.368486   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0124 10:51:50.373293   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0124 10:51:50.373309   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0124 10:51:50.382034   29780 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0124 10:51:50.390912   29780 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0124 10:51:50.390931   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0124 10:51:50.396819   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0124 10:51:50.396836   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0124 10:51:50.469356   29780 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0124 10:51:50.481116   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0124 10:51:50.481130   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0124 10:51:50.504627   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0124 10:51:50.504642   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0124 10:51:50.580506   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0124 10:51:50.580523   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0124 10:51:50.666988   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0124 10:51:50.667002   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0124 10:51:50.769473   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0124 10:51:50.769492   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0124 10:51:50.788780   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0124 10:51:50.788800   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0124 10:51:50.809423   29780 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0124 10:51:51.499283   29780 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.171665367s)
	I0124 10:51:51.499303   29780 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.117243607s)
	I0124 10:51:51.515963   29780 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.046574062s)
	I0124 10:51:51.515990   29780 addons.go:457] Verifying addon metrics-server=true in "newest-cni-783000"
	I0124 10:51:51.654165   29780 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-783000 addons enable metrics-server	
	
	
	I0124 10:51:51.675440   29780 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0124 10:51:51.696033   29780 addons.go:488] enableAddons completed in 1.921176192s
	I0124 10:51:51.696534   29780 ssh_runner.go:195] Run: rm -f paused
	I0124 10:51:51.736912   29780 start.go:538] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0124 10:51:51.758360   29780 out.go:177] * Done! kubectl is now configured to use "newest-cni-783000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2023-01-24 18:34:18 UTC, end at Tue 2023-01-24 18:52:07 UTC. --
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[438]: time="2023-01-24T18:34:21.087677716Z" level=info msg="Daemon has completed initialization"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[438]: time="2023-01-24T18:34:21.111857700Z" level=info msg="API listen on [::]:2376"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[438]: time="2023-01-24T18:34:21.114467184Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[438]: time="2023-01-24T18:34:21.115118985Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[438]: time="2023-01-24T18:34:21.115247831Z" level=info msg="Daemon shutdown complete"
	Jan 24 18:34:21 old-k8s-version-115000 systemd[1]: docker.service: Succeeded.
	Jan 24 18:34:21 old-k8s-version-115000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 24 18:34:21 old-k8s-version-115000 systemd[1]: Starting Docker Application Container Engine...
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.167334866Z" level=info msg="Starting up"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.168932416Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.168971093Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.169031380Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.169044583Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.170673568Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.170690657Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.170705796Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.170718011Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.176701201Z" level=info msg="Loading containers: start."
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.255090071Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.286732777Z" level=info msg="Loading containers: done."
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.295014375Z" level=info msg="Docker daemon" commit=42c8b31 graphdriver(s)=overlay2 version=20.10.22
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.295077787Z" level=info msg="Daemon has completed initialization"
	Jan 24 18:34:21 old-k8s-version-115000 systemd[1]: Started Docker Application Container Engine.
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.319378820Z" level=info msg="API listen on [::]:2376"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.322431114Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2023-01-24T18:52:09Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Jan24 18:23] hrtimer: interrupt took 2585846 ns
	
	* 
	* ==> kernel <==
	*  18:52:09 up  1:51,  0 users,  load average: 2.10, 1.28, 1.27
	Linux old-k8s-version-115000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2023-01-24 18:34:18 UTC, end at Tue 2023-01-24 18:52:09 UTC. --
	Jan 24 18:52:07 old-k8s-version-115000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 24 18:52:08 old-k8s-version-115000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 927.
	Jan 24 18:52:08 old-k8s-version-115000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 24 18:52:08 old-k8s-version-115000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 24 18:52:08 old-k8s-version-115000 kubelet[24936]: I0124 18:52:08.713009   24936 server.go:410] Version: v1.16.0
	Jan 24 18:52:08 old-k8s-version-115000 kubelet[24936]: I0124 18:52:08.713442   24936 plugins.go:100] No cloud provider specified.
	Jan 24 18:52:08 old-k8s-version-115000 kubelet[24936]: I0124 18:52:08.713493   24936 server.go:773] Client rotation is on, will bootstrap in background
	Jan 24 18:52:08 old-k8s-version-115000 kubelet[24936]: I0124 18:52:08.715542   24936 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 24 18:52:08 old-k8s-version-115000 kubelet[24936]: W0124 18:52:08.716385   24936 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 24 18:52:08 old-k8s-version-115000 kubelet[24936]: W0124 18:52:08.716457   24936 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 24 18:52:08 old-k8s-version-115000 kubelet[24936]: F0124 18:52:08.716494   24936 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 24 18:52:08 old-k8s-version-115000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 24 18:52:08 old-k8s-version-115000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 24 18:52:09 old-k8s-version-115000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Jan 24 18:52:09 old-k8s-version-115000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 24 18:52:09 old-k8s-version-115000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 24 18:52:09 old-k8s-version-115000 kubelet[24952]: I0124 18:52:09.462625   24952 server.go:410] Version: v1.16.0
	Jan 24 18:52:09 old-k8s-version-115000 kubelet[24952]: I0124 18:52:09.462902   24952 plugins.go:100] No cloud provider specified.
	Jan 24 18:52:09 old-k8s-version-115000 kubelet[24952]: I0124 18:52:09.462938   24952 server.go:773] Client rotation is on, will bootstrap in background
	Jan 24 18:52:09 old-k8s-version-115000 kubelet[24952]: I0124 18:52:09.464683   24952 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 24 18:52:09 old-k8s-version-115000 kubelet[24952]: W0124 18:52:09.465458   24952 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 24 18:52:09 old-k8s-version-115000 kubelet[24952]: W0124 18:52:09.465533   24952 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 24 18:52:09 old-k8s-version-115000 kubelet[24952]: F0124 18:52:09.465558   24952 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 24 18:52:09 old-k8s-version-115000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 24 18:52:09 old-k8s-version-115000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0124 10:52:09.572385   30036 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-115000 -n old-k8s-version-115000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-115000 -n old-k8s-version-115000: exit status 2 (416.82278ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-115000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.90s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:52:36.610823    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
E0124 10:52:41.857368    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:52:45.227368    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:52:53.745076    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:53:26.356782    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/enable-default-cni-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:53:39.831552    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:54:18.882460    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
E0124 10:54:18.888787    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
E0124 10:54:18.900974    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
E0124 10:54:18.923153    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
E0124 10:54:18.964991    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
E0124 10:54:19.047069    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
E0124 10:54:19.207754    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
E0124 10:54:19.529889    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
E0124 10:54:20.172173    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
E0124 10:54:21.452482    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:54:24.012795    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
E0124 10:54:29.134984    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:54:37.446792    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:54:39.377504    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:54:59.859854    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:55:14.476003    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:55:19.579662    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:55:40.822356    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
E0124 10:55:43.372736    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/auto-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:56:32.032980    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:56:36.661610    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:56:42.628419    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:56:51.093884    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:57:02.745198    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:57:36.615003    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:57:41.860918    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:57:45.228895    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:57:53.748220    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55501/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0124 10:58:26.358304    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/enable-default-cni-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0124 10:58:39.832776    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0124 10:58:46.457349    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/auto-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0124 10:59:18.884398    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0124 10:59:35.081440    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0124 10:59:37.448505    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0124 10:59:46.588453    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/default-k8s-diff-port-436000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0124 10:59:54.147401    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded (repeated 21 times)
E0124 11:00:14.477943    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded (repeated 5 times)
E0124 11:00:19.582403    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded (repeated 20 times)
E0124 11:00:39.660120    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded (repeated 3 times)
E0124 11:00:43.372547    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/auto-129000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded (repeated 39 times)
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-115000 -n old-k8s-version-115000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-115000 -n old-k8s-version-115000: exit status 2 (406.169296ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-115000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-115000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-115000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.945µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-115000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-115000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-115000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7",
	        "Created": "2023-01-24T18:28:39.694404231Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313083,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-24T18:34:18.167062744Z",
	            "FinishedAt": "2023-01-24T18:34:15.257971971Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/hosts",
	        "LogPath": "/var/lib/docker/containers/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7/a86b483b846749f4d97fdbe2ec82d893df12b3f9c2715903c32559271be484d7-json.log",
	        "Name": "/old-k8s-version-115000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-115000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-115000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0-init/diff:/var/lib/docker/overlay2/9c481b6fc4eda78a970c6b00c96c9f145b5d588195952c506fee1f3a38f6b827/diff:/var/lib/docker/overlay2/47481f9bc7214dddcf62375b5df8c94540c57703b1d6d51ac990043359419126/diff:/var/lib/docker/overlay2/b598a508e6cab858c4e3e0eb9d52d91575db756297302c95c476bb85082f44e0/diff:/var/lib/docker/overlay2/eb2938493863a763d06f90aa9a8774a6cf5891162d070c28c5ea356826cb9c00/diff:/var/lib/docker/overlay2/32de44262b0214eac920d9f248eab5c56907a2e7f6aa69c29c4e46c4074a7cac/diff:/var/lib/docker/overlay2/1a295a0be202f67a8806d1a09e94b6509e529ccee06e851bf52c63e932a41598/diff:/var/lib/docker/overlay2/f31de4b92cc5bd6d7f65b450b1907917e4402f07bc4105fb25aa1f8246c4a4e5/diff:/var/lib/docker/overlay2/91ed229d1177ed82e2ad15ef48cab26e269f4aac446f236ccf7602f7812b4844/diff:/var/lib/docker/overlay2/d76c7ee0e38e12ccde73d3892d573e4aace99b016da7a10649752a4fe4fe8638/diff:/var/lib/docker/overlay2/5ded14
3e1cfe20dc794363143043800e9ddaa724457e76dfc3480bc88bdcf50b/diff:/var/lib/docker/overlay2/93a2cbd1b1d5abd2ffd6d5d72e40d5e932e3cdee4ef9b08c0ff0e340f0b82250/diff:/var/lib/docker/overlay2/bb3f5e5d2796cb20f7b19463a04326cc5e79a70f606d384a018bf2677fdc9c59/diff:/var/lib/docker/overlay2/9cbefe5a0d987b7c33fa83edf8c1c1e50920e178002ea80eff9cdf15510e9f7b/diff:/var/lib/docker/overlay2/33e82fcf8ab49ebe12677ce9de48d21aef3b5793b44b0bcabc73c3674c581010/diff:/var/lib/docker/overlay2/e11a40d223ac259cc9385f5df423a4114d92d60488a253e2c2690c0b7457a8e8/diff:/var/lib/docker/overlay2/61a6833055db19e06daf765b9f85b5e3a0b5c194642afb06034ca3aba1f55909/diff:/var/lib/docker/overlay2/ee3319bed930e23040288482eca784537f46b47463ff427584d9b2b5211797e9/diff:/var/lib/docker/overlay2/eee58dbcea38f98302a06040b8570e985d703290fd458ac7329bfa0e55bd8448/diff:/var/lib/docker/overlay2/b658fe28fcf5e1fc3d6b2e9b4b847d6121a8b983e9c7fb1985d5ab2346e98f8b/diff:/var/lib/docker/overlay2/31ea9d8f709524c49028c58c0f8a05346bb654e8f78be840f557b2bedf65a54a/diff:/var/lib/d
ocker/overlay2/3cf9440c13010ba35d99312d9b15c642407767bf1e0fc45d05a2549c249afec7/diff:/var/lib/docker/overlay2/800878867e03c9e3e3b31fd2704a00aa2ae0e3986914887bf3d409b9550ff520/diff:/var/lib/docker/overlay2/581fb56aa8110438414995ed4c4a14b9912c75de1736b4b7d23e0b8fe720ecd9/diff:/var/lib/docker/overlay2/07b86cd484da6c8fb2f4bee904f870665d71591e8dc7be5efa53e1794b81d00f/diff:/var/lib/docker/overlay2/1079786f8be32b332c09a1406a8c3309f812525af6501891ff134bf591345a8d/diff:/var/lib/docker/overlay2/aaaabdfc565926946334338a8c5990344cee39d099a4b9f5151467564c8b476e/diff:/var/lib/docker/overlay2/f39918c58fc40801faea98e8de1249f865a73650e08eab314c9d152e6a3b34b5/diff:/var/lib/docker/overlay2/974fee46992fba128bb6ec5407ff459f67134f9674349df844ad7efddb1ce197/diff:/var/lib/docker/overlay2/da1fb4192fa1a6e35f4cad8340c23595b9197c7b307dc3acbddafef7c3eabc82/diff:/var/lib/docker/overlay2/3b77e137962b80628f381627a68f75ff468d7ccae4458d96fa0579ac771218fe/diff:/var/lib/docker/overlay2/3faa717465cabff3baa6c98c646cb65ad47d063b215430893bb7f045676
3a794/diff:/var/lib/docker/overlay2/9597f4dd91d0689567f5b93707e47b90e9d9ba308488dff4c828008698d40802/diff:/var/lib/docker/overlay2/04df968177dde5eecc89152d1a182fb9b925a8092117778b87497db1e350ce61/diff:/var/lib/docker/overlay2/466c44ae57218cc92c3105e1e0070db81834724a69f8a4417391186e96073aae/diff:/var/lib/docker/overlay2/ec2a2a479f00a246da5470537471969c82985b2cfb98a4e35c9ff415d2a73141/diff:/var/lib/docker/overlay2/bce991b71b3cfd9fbda43bee1a05c1f7f5d1182ddbcf8f503851bd5874171f2b/diff:/var/lib/docker/overlay2/fe2929ef3680f5c3a0bd3eb8595ea97b8e43bba3f4c05803abf70a6c92a68114/diff:/var/lib/docker/overlay2/656654dcbc6d86ba911abe2cc555591ab82c37f6960cd0dad3551f4bf80122ee/diff:/var/lib/docker/overlay2/0e7faeb1891565534bd5d77c8619c32e7b51715e75dc04640b4ef0c3bc72a6b8/diff:/var/lib/docker/overlay2/9f0ce2e9bbc988036f4207b6b8237ba1f52855f6ee97e629aa058d0adcb5e7c4/diff:/var/lib/docker/overlay2/2bd42f9be2610b8910f7b39f5ce1792b9f608e775e6d3b39b12fef011686b5bd/diff:/var/lib/docker/overlay2/79703f59d1d8ea0485fb2360a75307a714a167
fc401d5ce68a58a9a201ea4152/diff:/var/lib/docker/overlay2/e589647f3d60f11426627ec161f2bf8878a577bc3388bb26e206fc5182f4f83c/diff:/var/lib/docker/overlay2/059bdd38117e61965c55837c7fd45d7e2825b162ea4e4cab6a656a436a7bbed6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e4380f78d315a88c5e4323101f897e468d006d2b599f317c78710e9195396e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-115000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-115000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-115000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-115000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-115000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aeae45eac0d9801aed631b6f91823fc2a72eaba680eac64041de99fb28e72c64",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55502"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55498"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55499"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55500"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55501"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/aeae45eac0d9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-115000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a86b483b8467",
	                        "old-k8s-version-115000"
	                    ],
	                    "NetworkID": "5ad39a0309903e0f7f41a6f2aca4e1831033f2fa0e547dc51ca28338a4ce6eed",
	                    "EndpointID": "8a3908d14d7d1b555adf982499222c517cb6fc8a004ccb9ffc793e4d2e71600d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-115000 -n old-k8s-version-115000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-115000 -n old-k8s-version-115000: exit status 2 (401.857146ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-115000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-115000 logs -n 25: (3.524467068s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-777000                                | embed-certs-777000           | jenkins | v1.28.0 | 24 Jan 23 10:43 PST | 24 Jan 23 10:43 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p embed-certs-777000                                | embed-certs-777000           | jenkins | v1.28.0 | 24 Jan 23 10:43 PST | 24 Jan 23 10:43 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-777000                                | embed-certs-777000           | jenkins | v1.28.0 | 24 Jan 23 10:43 PST | 24 Jan 23 10:43 PST |
	| delete  | -p embed-certs-777000                                | embed-certs-777000           | jenkins | v1.28.0 | 24 Jan 23 10:43 PST | 24 Jan 23 10:43 PST |
	| delete  | -p                                                   | disable-driver-mounts-724000 | jenkins | v1.28.0 | 24 Jan 23 10:43 PST | 24 Jan 23 10:43 PST |
	|         | disable-driver-mounts-724000                         |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:43 PST | 24 Jan 23 10:44 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                             | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:44 PST | 24 Jan 23 10:44 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p                                                   | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:44 PST | 24 Jan 23 10:44 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-436000     | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:44 PST | 24 Jan 23 10:44 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:44 PST | 24 Jan 23 10:49 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| ssh     | -p                                                   | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:50 PST | 24 Jan 23 10:50 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                           |                              |         |         |                     |                     |
	| pause   | -p                                                   | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:50 PST | 24 Jan 23 10:50 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p                                                   | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:50 PST | 24 Jan 23 10:50 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:50 PST | 24 Jan 23 10:50 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-436000 | jenkins | v1.28.0 | 24 Jan 23 10:50 PST | 24 Jan 23 10:50 PST |
	|         | default-k8s-diff-port-436000                         |                              |         |         |                     |                     |
	| start   | -p newest-cni-783000 --memory=2200 --alsologtostderr | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:50 PST | 24 Jan 23 10:51 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-783000           | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p newest-cni-783000                                 | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-783000                | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p newest-cni-783000 --memory=2200 --alsologtostderr | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-783000 sudo                            | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	|         | crictl images -o json                                |                              |         |         |                     |                     |
	| pause   | -p newest-cni-783000                                 | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p newest-cni-783000                                 | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p newest-cni-783000                                 | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	| delete  | -p newest-cni-783000                                 | newest-cni-783000            | jenkins | v1.28.0 | 24 Jan 23 10:51 PST | 24 Jan 23 10:51 PST |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/24 10:51:26
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0124 10:51:26.697907   29780 out.go:296] Setting OutFile to fd 1 ...
	I0124 10:51:26.698151   29780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 10:51:26.698157   29780 out.go:309] Setting ErrFile to fd 2...
	I0124 10:51:26.698161   29780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 10:51:26.698272   29780 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
	I0124 10:51:26.698762   29780 out.go:303] Setting JSON to false
	I0124 10:51:26.718352   29780 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6661,"bootTime":1674579625,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0124 10:51:26.718479   29780 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0124 10:51:26.740314   29780 out.go:177] * [newest-cni-783000] minikube v1.28.0 on Darwin 13.1
	I0124 10:51:26.782081   29780 notify.go:220] Checking for updates...
	I0124 10:51:26.804054   29780 out.go:177]   - MINIKUBE_LOCATION=15565
	I0124 10:51:26.825154   29780 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 10:51:26.846881   29780 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0124 10:51:26.868214   29780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 10:51:26.890331   29780 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	I0124 10:51:26.912130   29780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0124 10:51:26.934817   29780 config.go:180] Loaded profile config "newest-cni-783000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 10:51:26.935552   29780 driver.go:365] Setting default libvirt URI to qemu:///system
	I0124 10:51:26.998351   29780 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0124 10:51:26.998489   29780 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 10:51:27.140103   29780 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-24 18:51:27.04748855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:
/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 10:51:27.161161   29780 out.go:177] * Using the docker driver based on existing profile
	I0124 10:51:27.182793   29780 start.go:296] selected driver: docker
	I0124 10:51:27.182816   29780 start.go:840] validating driver "docker" against &{Name:newest-cni-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-783000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: S
ubnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:51:27.182943   29780 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0124 10:51:27.186775   29780 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 10:51:27.328889   29780 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-24 18:51:27.236235099 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 10:51:27.329053   29780 start_flags.go:936] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0124 10:51:27.329073   29780 cni.go:84] Creating CNI manager for ""
	I0124 10:51:27.329087   29780 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0124 10:51:27.329107   29780 start_flags.go:319] config:
	{Name:newest-cni-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-783000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:51:27.372144   29780 out.go:177] * Starting control plane node newest-cni-783000 in cluster newest-cni-783000
	I0124 10:51:27.394959   29780 cache.go:120] Beginning downloading kic base image for docker with docker
	I0124 10:51:27.416032   29780 out.go:177] * Pulling base image ...
	I0124 10:51:27.457927   29780 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 10:51:27.457975   29780 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0124 10:51:27.457989   29780 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0124 10:51:27.458006   29780 cache.go:57] Caching tarball of preloaded images
	I0124 10:51:27.458131   29780 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0124 10:51:27.458142   29780 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0124 10:51:27.458633   29780 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/config.json ...
	I0124 10:51:27.515531   29780 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0124 10:51:27.515562   29780 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0124 10:51:27.515581   29780 cache.go:193] Successfully downloaded all kic artifacts
	I0124 10:51:27.515624   29780 start.go:364] acquiring machines lock for newest-cni-783000: {Name:mk751161a7eef0d1ed5d3f7aa701e7073f3f2ef9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0124 10:51:27.515716   29780 start.go:368] acquired machines lock for "newest-cni-783000" in 73.938µs
	I0124 10:51:27.515737   29780 start.go:96] Skipping create...Using existing machine configuration
	I0124 10:51:27.515747   29780 fix.go:55] fixHost starting: 
	I0124 10:51:27.516015   29780 cli_runner.go:164] Run: docker container inspect newest-cni-783000 --format={{.State.Status}}
	I0124 10:51:27.573070   29780 fix.go:103] recreateIfNeeded on newest-cni-783000: state=Stopped err=<nil>
	W0124 10:51:27.573103   29780 fix.go:129] unexpected machine state, will restart: <nil>
	I0124 10:51:27.616838   29780 out.go:177] * Restarting existing docker container for "newest-cni-783000" ...
	I0124 10:51:27.637805   29780 cli_runner.go:164] Run: docker start newest-cni-783000
	I0124 10:51:27.982632   29780 cli_runner.go:164] Run: docker container inspect newest-cni-783000 --format={{.State.Status}}
	I0124 10:51:28.043170   29780 kic.go:426] container "newest-cni-783000" state is running.
	I0124 10:51:28.043750   29780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-783000
	I0124 10:51:28.111751   29780 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/config.json ...
	I0124 10:51:28.112240   29780 machine.go:88] provisioning docker machine ...
	I0124 10:51:28.112267   29780 ubuntu.go:169] provisioning hostname "newest-cni-783000"
	I0124 10:51:28.112352   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:28.187030   29780 main.go:141] libmachine: Using SSH client type: native
	I0124 10:51:28.187272   29780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56561 <nil> <nil>}
	I0124 10:51:28.187292   29780 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-783000 && echo "newest-cni-783000" | sudo tee /etc/hostname
	I0124 10:51:28.330814   29780 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-783000
	
	I0124 10:51:28.330912   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:28.392757   29780 main.go:141] libmachine: Using SSH client type: native
	I0124 10:51:28.392929   29780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56561 <nil> <nil>}
	I0124 10:51:28.392945   29780 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-783000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-783000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-783000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0124 10:51:28.526038   29780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
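
The hostname step above first sets the hostname over SSH, then runs the small shell script shown in the log: if the new name is not already present in /etc/hosts it either rewrites an existing 127.0.1.1 line with sed or appends a fresh mapping with tee -a. The following Go sketch performs the same idempotent edit on a hosts-format file; the file path and function name are illustrative and this is not minikube's code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell logic above: if a line already maps
// 127.0.1.1, rewrite it to point at hostname; otherwise append a new mapping.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	replaced := false
	for i, line := range lines {
		if strings.HasPrefix(strings.TrimSpace(line), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	// Illustrative invocation against a scratch copy, not the real /etc/hosts.
	if err := ensureHostsEntry("hosts.test", "newest-cni-783000"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
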
	I0124 10:51:28.526062   29780 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3057/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3057/.minikube}
	I0124 10:51:28.526088   29780 ubuntu.go:177] setting up certificates
	I0124 10:51:28.526099   29780 provision.go:83] configureAuth start
	I0124 10:51:28.526187   29780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-783000
	I0124 10:51:28.585949   29780 provision.go:138] copyHostCerts
	I0124 10:51:28.586044   29780 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem, removing ...
	I0124 10:51:28.586053   29780 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem
	I0124 10:51:28.586153   29780 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem (1078 bytes)
	I0124 10:51:28.586358   29780 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem, removing ...
	I0124 10:51:28.586366   29780 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem
	I0124 10:51:28.586432   29780 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem (1123 bytes)
	I0124 10:51:28.586571   29780 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem, removing ...
	I0124 10:51:28.586577   29780 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem
	I0124 10:51:28.586638   29780 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem (1675 bytes)
	I0124 10:51:28.586758   29780 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem org=jenkins.newest-cni-783000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-783000]
	I0124 10:51:28.691288   29780 provision.go:172] copyRemoteCerts
	I0124 10:51:28.691348   29780 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0124 10:51:28.691396   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:28.749448   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:28.843669   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0124 10:51:28.861058   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0124 10:51:28.878159   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0124 10:51:28.896543   29780 provision.go:86] duration metric: configureAuth took 370.427915ms
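
configureAuth above regenerates the machine's server certificate with the container IP, localhost and the machine name as subject alternative names, signed against the shared minikube CA key. The sketch below shows the general shape of issuing such a SAN-bearing certificate with crypto/x509; for brevity it self-signs instead of using a separate CA key, so it is an approximation of the flow, not minikube's implementation.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key pair for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-783000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs corresponding to the san=[...] list in the log.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-783000"},
		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed here; the real flow signs with the shared CA certificate and key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
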
	I0124 10:51:28.896562   29780 ubuntu.go:193] setting minikube options for container-runtime
	I0124 10:51:28.896738   29780 config.go:180] Loaded profile config "newest-cni-783000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 10:51:28.896813   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:28.959473   29780 main.go:141] libmachine: Using SSH client type: native
	I0124 10:51:28.959633   29780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56561 <nil> <nil>}
	I0124 10:51:28.959642   29780 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0124 10:51:29.095618   29780 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0124 10:51:29.095633   29780 ubuntu.go:71] root file system type: overlay
	I0124 10:51:29.095830   29780 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0124 10:51:29.095929   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:29.154282   29780 main.go:141] libmachine: Using SSH client type: native
	I0124 10:51:29.154452   29780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56561 <nil> <nil>}
	I0124 10:51:29.154501   29780 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0124 10:51:29.298648   29780 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0124 10:51:29.298739   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:29.356991   29780 main.go:141] libmachine: Using SSH client type: native
	I0124 10:51:29.357143   29780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56561 <nil> <nil>}
	I0124 10:51:29.357156   29780 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0124 10:51:29.496799   29780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
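
The single SSH command just above is the whole "update only if changed" step: the freshly rendered unit is written to docker.service.new, diffed against the live unit, and only when they differ is it moved into place followed by daemon-reload, enable and restart. A minimal Go sketch of that compare-then-replace pattern follows; the file paths and the reload command are placeholders, not the paths used in the log.

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

// replaceIfChanged installs stagedPath over curPath only when the contents
// differ, then runs reload so the change takes effect; unchanged files are
// left alone and the staged copy is discarded.
func replaceIfChanged(curPath, stagedPath string, reload *exec.Cmd) error {
	cur, _ := os.ReadFile(curPath) // a missing current file simply counts as "changed"
	staged, err := os.ReadFile(stagedPath)
	if err != nil {
		return err
	}
	if bytes.Equal(cur, staged) {
		return os.Remove(stagedPath) // nothing to do, drop the staged copy
	}
	if err := os.Rename(stagedPath, curPath); err != nil {
		return err
	}
	return reload.Run()
}

func main() {
	reload := exec.Command("systemctl", "daemon-reload")
	if err := replaceIfChanged("docker.service", "docker.service.new", reload); err != nil {
		log.Fatal(err)
	}
}
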
	I0124 10:51:29.496815   29780 machine.go:91] provisioned docker machine in 1.384558859s
	I0124 10:51:29.496824   29780 start.go:300] post-start starting for "newest-cni-783000" (driver="docker")
	I0124 10:51:29.496829   29780 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0124 10:51:29.496908   29780 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0124 10:51:29.496963   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:29.555421   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:29.648243   29780 ssh_runner.go:195] Run: cat /etc/os-release
	I0124 10:51:29.652421   29780 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0124 10:51:29.652443   29780 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0124 10:51:29.652452   29780 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0124 10:51:29.652456   29780 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0124 10:51:29.652465   29780 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/addons for local assets ...
	I0124 10:51:29.652560   29780 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/files for local assets ...
	I0124 10:51:29.652736   29780 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem -> 43552.pem in /etc/ssl/certs
	I0124 10:51:29.652921   29780 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0124 10:51:29.660879   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /etc/ssl/certs/43552.pem (1708 bytes)
	I0124 10:51:29.679389   29780 start.go:303] post-start completed in 182.549959ms
	I0124 10:51:29.679481   29780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0124 10:51:29.679541   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:29.743292   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:29.836917   29780 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0124 10:51:29.841708   29780 fix.go:57] fixHost completed within 2.325946705s
	I0124 10:51:29.841718   29780 start.go:83] releasing machines lock for "newest-cni-783000", held for 2.325979284s
	I0124 10:51:29.841807   29780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-783000
	I0124 10:51:29.899817   29780 ssh_runner.go:195] Run: cat /version.json
	I0124 10:51:29.899857   29780 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0124 10:51:29.899882   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:29.899932   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:29.963136   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:29.963295   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:30.055801   29780 ssh_runner.go:195] Run: systemctl --version
	I0124 10:51:30.116715   29780 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0124 10:51:30.121951   29780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0124 10:51:30.137492   29780 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0124 10:51:30.137614   29780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0124 10:51:30.145228   29780 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0124 10:51:30.158316   29780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0124 10:51:30.166040   29780 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0124 10:51:30.166053   29780 start.go:472] detecting cgroup driver to use...
	I0124 10:51:30.166069   29780 detect.go:158] detected "cgroupfs" cgroup driver on host os
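
The cgroup-driver value detected here drives the runtime reconfiguration that follows (containerd's SystemdCgroup flag and docker's daemon.json are both rewritten to match "cgroupfs"). One way to read the effective driver from a running Docker daemon, as this log itself does further down with docker info --format {{.CgroupDriver}}, is to shell out and trim the output; a minimal sketch:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// dockerCgroupDriver asks the local Docker daemon which cgroup driver it is
// using ("cgroupfs" or "systemd"), the same query issued later in this log.
func dockerCgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	driver, err := dockerCgroupDriver()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker cgroup driver:", driver)
}
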
	I0124 10:51:30.166250   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 10:51:30.180030   29780 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0124 10:51:30.189031   29780 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0124 10:51:30.197489   29780 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0124 10:51:30.197544   29780 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0124 10:51:30.206238   29780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 10:51:30.215029   29780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0124 10:51:30.223123   29780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0124 10:51:30.231593   29780 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0124 10:51:30.239353   29780 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0124 10:51:30.247635   29780 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0124 10:51:30.255062   29780 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0124 10:51:30.262258   29780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:51:30.328323   29780 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0124 10:51:30.411139   29780 start.go:472] detecting cgroup driver to use...
	I0124 10:51:30.411162   29780 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0124 10:51:30.411252   29780 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0124 10:51:30.425822   29780 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0124 10:51:30.425901   29780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0124 10:51:30.437605   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0124 10:51:30.454407   29780 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0124 10:51:30.567240   29780 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0124 10:51:30.631470   29780 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0124 10:51:30.631489   29780 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0124 10:51:30.671558   29780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:51:30.767330   29780 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0124 10:51:31.011915   29780 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0124 10:51:31.081012   29780 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0124 10:51:31.140629   29780 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0124 10:51:31.222243   29780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0124 10:51:31.295818   29780 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0124 10:51:31.308780   29780 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0124 10:51:31.308957   29780 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0124 10:51:31.313790   29780 start.go:540] Will wait 60s for crictl version
	I0124 10:51:31.313834   29780 ssh_runner.go:195] Run: which crictl
	I0124 10:51:31.317787   29780 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0124 10:51:31.436356   29780 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.22
	RuntimeApiVersion:  v1alpha2
	I0124 10:51:31.436436   29780 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 10:51:31.469950   29780 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0124 10:51:31.540972   29780 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.22 ...
	I0124 10:51:31.541121   29780 cli_runner.go:164] Run: docker exec -t newest-cni-783000 dig +short host.docker.internal
	I0124 10:51:31.656020   29780 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0124 10:51:31.656132   29780 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0124 10:51:31.660652   29780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 10:51:31.670871   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:31.753123   29780 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0124 10:51:31.774243   29780 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 10:51:31.774403   29780 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 10:51:31.801272   29780 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.4
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/etcd:v3.3.8-0-gke.1
	registry.k8s.io/pause:test2
	
	-- /stdout --
	I0124 10:51:31.801290   29780 docker.go:560] Images already preloaded, skipping extraction
	I0124 10:51:31.801417   29780 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0124 10:51:31.827845   29780 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.4
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/etcd:v3.3.8-0-gke.1
	registry.k8s.io/pause:test2
	
	-- /stdout --
	I0124 10:51:31.827867   29780 cache_images.go:84] Images are preloaded, skipping loading
	I0124 10:51:31.827961   29780 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0124 10:51:31.899754   29780 cni.go:84] Creating CNI manager for ""
	I0124 10:51:31.899772   29780 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0124 10:51:31.899794   29780 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0124 10:51:31.899811   29780 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-783000 NodeName:newest-cni-783000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0124 10:51:31.900072   29780 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-783000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0124 10:51:31.900236   29780 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-783000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:newest-cni-783000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
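
The kubeadm.go:177 config and the kubeadm.go:968 kubelet unit above are rendered from the option set captured at kubeadm.go:172 (pod CIDR, advertise address, feature gates, cgroup driver and so on). The sketch below renders a much smaller config the same way with text/template; the template text and field names are invented for illustration and are not minikube's template.

package main

import (
	"log"
	"os"
	"text/template"
)

// Opts holds the handful of values substituted into the illustrative template.
type Opts struct {
	ClusterName       string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	opts := Opts{
		ClusterName:       "newest-cni-783000",
		PodSubnet:         "10.42.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.26.1",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		log.Fatal(err)
	}
}
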
	I0124 10:51:31.900305   29780 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0124 10:51:31.908733   29780 binaries.go:44] Found k8s binaries, skipping transfer
	I0124 10:51:31.908797   29780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0124 10:51:31.917000   29780 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0124 10:51:31.931285   29780 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0124 10:51:31.944956   29780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I0124 10:51:31.958964   29780 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0124 10:51:31.962868   29780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0124 10:51:31.973084   29780 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000 for IP: 192.168.67.2
	I0124 10:51:31.973107   29780 certs.go:186] acquiring lock for shared ca certs: {Name:mk86da88dcf76a0f52f98f7208bc1d04a2e55c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:51:31.973287   29780 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key
	I0124 10:51:31.973357   29780 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key
	I0124 10:51:31.973453   29780 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/client.key
	I0124 10:51:31.973527   29780 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/apiserver.key.c7fa3a9e
	I0124 10:51:31.973579   29780 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/proxy-client.key
	I0124 10:51:31.973806   29780 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem (1338 bytes)
	W0124 10:51:31.973865   29780 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355_empty.pem, impossibly tiny 0 bytes
	I0124 10:51:31.973876   29780 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem (1679 bytes)
	I0124 10:51:31.973916   29780 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem (1078 bytes)
	I0124 10:51:31.973951   29780 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem (1123 bytes)
	I0124 10:51:31.973979   29780 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem (1675 bytes)
	I0124 10:51:31.974051   29780 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem (1708 bytes)
	I0124 10:51:31.974718   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0124 10:51:31.993524   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0124 10:51:32.011173   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0124 10:51:32.029217   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/newest-cni-783000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0124 10:51:32.046741   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0124 10:51:32.064315   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0124 10:51:32.082316   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0124 10:51:32.100236   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0124 10:51:32.118674   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0124 10:51:32.136465   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem --> /usr/share/ca-certificates/4355.pem (1338 bytes)
	I0124 10:51:32.153829   29780 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /usr/share/ca-certificates/43552.pem (1708 bytes)
	I0124 10:51:32.171649   29780 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0124 10:51:32.185150   29780 ssh_runner.go:195] Run: openssl version
	I0124 10:51:32.191148   29780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43552.pem && ln -fs /usr/share/ca-certificates/43552.pem /etc/ssl/certs/43552.pem"
	I0124 10:51:32.199624   29780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43552.pem
	I0124 10:51:32.203690   29780 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:34 /usr/share/ca-certificates/43552.pem
	I0124 10:51:32.203747   29780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43552.pem
	I0124 10:51:32.209427   29780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43552.pem /etc/ssl/certs/3ec20f2e.0"
	I0124 10:51:32.217171   29780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0124 10:51:32.225866   29780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:51:32.229861   29780 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:51:32.229938   29780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0124 10:51:32.235528   29780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0124 10:51:32.243043   29780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4355.pem && ln -fs /usr/share/ca-certificates/4355.pem /etc/ssl/certs/4355.pem"
	I0124 10:51:32.251161   29780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4355.pem
	I0124 10:51:32.255387   29780 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:34 /usr/share/ca-certificates/4355.pem
	I0124 10:51:32.255431   29780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4355.pem
	I0124 10:51:32.261131   29780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4355.pem /etc/ssl/certs/51391683.0"
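
The block above installs each extra CA into /usr/share/ca-certificates and then creates the /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL's lookup-by-directory mechanism expects, with the hash taken from openssl x509 -hash -noout and the link made via ln -fs. A small Go sketch of the same two steps, shelling out for the hash exactly as the log does (the paths are placeholders):

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and symlinks
// <certDir>/<hash>.0 to it, the layout OpenSSL expects for CA lookup by directory.
func linkCertByHash(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := certDir + "/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
}
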
	I0124 10:51:32.268762   29780 kubeadm.go:401] StartCluster: {Name:newest-cni-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-783000 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 10:51:32.268875   29780 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0124 10:51:32.292515   29780 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0124 10:51:32.300532   29780 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0124 10:51:32.300555   29780 kubeadm.go:633] restartCluster start
	I0124 10:51:32.300610   29780 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0124 10:51:32.307700   29780 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:32.307769   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:32.366039   29780 kubeconfig.go:135] verify returned: extract IP: "newest-cni-783000" does not appear in /Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 10:51:32.366223   29780 kubeconfig.go:146] "newest-cni-783000" context is missing from /Users/jenkins/minikube-integration/15565-3057/kubeconfig - will repair!
	I0124 10:51:32.366549   29780 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/kubeconfig: {Name:mk581b13c705409309a542f9aac4783c330d27c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:51:32.367909   29780 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0124 10:51:32.375964   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:32.376018   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:32.385246   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:32.885361   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:32.885534   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:32.895444   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:33.385744   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:33.385858   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:33.395304   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:33.886807   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:33.887050   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:33.898305   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:34.386773   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:34.386926   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:34.398116   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:34.885706   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:34.885792   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:34.895714   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:35.385949   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:35.386031   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:35.395870   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:35.887252   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:35.887475   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:35.900067   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:36.386466   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:36.386573   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:36.396015   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:36.887434   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:36.887597   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:36.898635   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:37.387439   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:37.387683   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:37.398812   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:37.886161   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:37.886243   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:37.896301   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:38.385563   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:38.385678   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:38.396128   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:38.886407   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:38.886663   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:38.898085   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:39.386029   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:39.386094   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:39.395335   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:39.885541   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:39.885647   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:39.896643   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:40.387422   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:40.387661   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:40.398458   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:40.886091   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:40.886166   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:40.895578   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:41.387457   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:41.387706   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:41.398647   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:41.886664   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:41.886779   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:41.897184   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:42.386072   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:42.386145   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:42.395622   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:42.395633   29780 api_server.go:165] Checking apiserver status ...
	I0124 10:51:42.395696   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0124 10:51:42.404566   29780 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:42.404580   29780 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0124 10:51:42.404584   29780 kubeadm.go:1120] stopping kube-system containers ...
	I0124 10:51:42.404651   29780 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0124 10:51:42.430333   29780 docker.go:456] Stopping containers: [e3dcb510c87f 3389fb5695a0 73bf5f6fbba6 f28fd1706866 73fb4dfd4faf 9669b519e466 d3afc2c015ed a9c26c7fc75a da83f474a090 dbb107636a4d 42da5a789dd4 bb6e1851b1ae fea8f4c0ca57 87a56b7df548]
	I0124 10:51:42.430440   29780 ssh_runner.go:195] Run: docker stop e3dcb510c87f 3389fb5695a0 73bf5f6fbba6 f28fd1706866 73fb4dfd4faf 9669b519e466 d3afc2c015ed a9c26c7fc75a da83f474a090 dbb107636a4d 42da5a789dd4 bb6e1851b1ae fea8f4c0ca57 87a56b7df548
	I0124 10:51:42.458631   29780 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0124 10:51:42.469725   29780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0124 10:51:42.477762   29780 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan 24 18:50 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 24 18:50 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan 24 18:50 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan 24 18:50 /etc/kubernetes/scheduler.conf
	
	I0124 10:51:42.477821   29780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0124 10:51:42.485419   29780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0124 10:51:42.493136   29780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0124 10:51:42.500240   29780 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:42.500293   29780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0124 10:51:42.507511   29780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0124 10:51:42.514877   29780 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0124 10:51:42.514927   29780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0124 10:51:42.522025   29780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0124 10:51:42.529456   29780 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0124 10:51:42.529471   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:51:42.590871   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:51:43.348119   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:51:43.488064   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:51:43.569294   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:51:43.678639   29780 api_server.go:51] waiting for apiserver process to appear ...
	I0124 10:51:43.678725   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:51:44.192447   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:51:44.691091   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:51:45.191333   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:51:45.204249   29780 api_server.go:71] duration metric: took 1.52560203s to wait for apiserver process to appear ...
	I0124 10:51:45.204272   29780 api_server.go:87] waiting for apiserver healthz status ...
	I0124 10:51:45.204286   29780 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56560/healthz ...
	I0124 10:51:47.561061   29780 api_server.go:278] https://127.0.0.1:56560/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0124 10:51:47.561081   29780 api_server.go:102] status: https://127.0.0.1:56560/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0124 10:51:48.062473   29780 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56560/healthz ...
	I0124 10:51:48.068933   29780 api_server.go:278] https://127.0.0.1:56560/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0124 10:51:48.068948   29780 api_server.go:102] status: https://127.0.0.1:56560/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0124 10:51:48.561932   29780 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56560/healthz ...
	I0124 10:51:48.567708   29780 api_server.go:278] https://127.0.0.1:56560/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0124 10:51:48.567724   29780 api_server.go:102] status: https://127.0.0.1:56560/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0124 10:51:49.063214   29780 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56560/healthz ...
	I0124 10:51:49.070781   29780 api_server.go:278] https://127.0.0.1:56560/healthz returned 200:
	ok
	I0124 10:51:49.079136   29780 api_server.go:140] control plane version: v1.26.1
	I0124 10:51:49.079151   29780 api_server.go:130] duration metric: took 3.874849884s to wait for apiserver health ...
	I0124 10:51:49.079157   29780 cni.go:84] Creating CNI manager for ""
	I0124 10:51:49.079168   29780 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0124 10:51:49.103012   29780 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0124 10:51:49.124198   29780 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0124 10:51:49.168714   29780 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0124 10:51:49.196141   29780 system_pods.go:43] waiting for kube-system pods to appear ...
	I0124 10:51:49.209012   29780 system_pods.go:59] 8 kube-system pods found
	I0124 10:51:49.209037   29780 system_pods.go:61] "coredns-787d4945fb-fjzwt" [22caaaf6-3474-4a1b-bcaf-c9853214930e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0124 10:51:49.209045   29780 system_pods.go:61] "etcd-newest-cni-783000" [eb99adef-1a7e-414a-b3d0-ddce87974396] Running
	I0124 10:51:49.209050   29780 system_pods.go:61] "kube-apiserver-newest-cni-783000" [7263f699-7f8a-44f5-8c28-f82ad6ab8379] Running
	I0124 10:51:49.209054   29780 system_pods.go:61] "kube-controller-manager-newest-cni-783000" [5e8ac97c-b5ee-4b7a-965c-d871f8fef1c6] Running
	I0124 10:51:49.209060   29780 system_pods.go:61] "kube-proxy-xgrfr" [df20d374-b9cf-412e-8310-68152cb2cfcf] Running
	I0124 10:51:49.209065   29780 system_pods.go:61] "kube-scheduler-newest-cni-783000" [24943f33-29f5-4197-8ceb-3d0602bc085b] Running
	I0124 10:51:49.209070   29780 system_pods.go:61] "metrics-server-7997d45854-pl5nw" [fd8dad35-f008-4d0f-917d-c71707e6c650] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0124 10:51:49.209092   29780 system_pods.go:61] "storage-provisioner" [42c87f27-355c-4845-8437-c80e6de9d89a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0124 10:51:49.209105   29780 system_pods.go:74] duration metric: took 12.9487ms to wait for pod list to return data ...
	I0124 10:51:49.209129   29780 node_conditions.go:102] verifying NodePressure condition ...
	I0124 10:51:49.213900   29780 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0124 10:51:49.213960   29780 node_conditions.go:123] node cpu capacity is 6
	I0124 10:51:49.213990   29780 node_conditions.go:105] duration metric: took 4.838246ms to run NodePressure ...
	I0124 10:51:49.214034   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0124 10:51:49.708211   29780 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0124 10:51:49.773656   29780 ops.go:34] apiserver oom_adj: -16
	I0124 10:51:49.773672   29780 kubeadm.go:637] restartCluster took 17.472998226s
	I0124 10:51:49.773683   29780 kubeadm.go:403] StartCluster complete in 17.504813994s
	I0124 10:51:49.773700   29780 settings.go:142] acquiring lock: {Name:mkeea169922107d4bc5deea23d2d200e61271e9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:51:49.773787   29780 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 10:51:49.774510   29780 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/kubeconfig: {Name:mk581b13c705409309a542f9aac4783c330d27c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 10:51:49.774828   29780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0124 10:51:49.774824   29780 addons.go:486] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0124 10:51:49.774884   29780 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-783000"
	I0124 10:51:49.774887   29780 addons.go:65] Setting metrics-server=true in profile "newest-cni-783000"
	I0124 10:51:49.774896   29780 addons.go:65] Setting default-storageclass=true in profile "newest-cni-783000"
	I0124 10:51:49.774910   29780 addons.go:227] Setting addon metrics-server=true in "newest-cni-783000"
	I0124 10:51:49.774910   29780 addons.go:227] Setting addon storage-provisioner=true in "newest-cni-783000"
	W0124 10:51:49.774919   29780 addons.go:236] addon metrics-server should already be in state true
	W0124 10:51:49.774922   29780 addons.go:236] addon storage-provisioner should already be in state true
	I0124 10:51:49.774966   29780 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-783000"
	I0124 10:51:49.774973   29780 host.go:66] Checking if "newest-cni-783000" exists ...
	I0124 10:51:49.774976   29780 host.go:66] Checking if "newest-cni-783000" exists ...
	I0124 10:51:49.774965   29780 addons.go:65] Setting dashboard=true in profile "newest-cni-783000"
	I0124 10:51:49.775008   29780 config.go:180] Loaded profile config "newest-cni-783000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 10:51:49.775008   29780 addons.go:227] Setting addon dashboard=true in "newest-cni-783000"
	W0124 10:51:49.775024   29780 addons.go:236] addon dashboard should already be in state true
	I0124 10:51:49.775094   29780 host.go:66] Checking if "newest-cni-783000" exists ...
	I0124 10:51:49.775381   29780 cli_runner.go:164] Run: docker container inspect newest-cni-783000 --format={{.State.Status}}
	I0124 10:51:49.775505   29780 cli_runner.go:164] Run: docker container inspect newest-cni-783000 --format={{.State.Status}}
	I0124 10:51:49.775560   29780 cli_runner.go:164] Run: docker container inspect newest-cni-783000 --format={{.State.Status}}
	I0124 10:51:49.775595   29780 cli_runner.go:164] Run: docker container inspect newest-cni-783000 --format={{.State.Status}}
	I0124 10:51:49.783820   29780 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-783000" context rescaled to 1 replicas
	I0124 10:51:49.783867   29780 start.go:221] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0124 10:51:49.806832   29780 out.go:177] * Verifying Kubernetes components...
	I0124 10:51:49.880671   29780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 10:51:49.916781   29780 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0124 10:51:49.912081   29780 addons.go:227] Setting addon default-storageclass=true in "newest-cni-783000"
	I0124 10:51:49.937772   29780 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0124 10:51:49.958433   29780 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W0124 10:51:49.958477   29780 addons.go:236] addon default-storageclass should already be in state true
	I0124 10:51:49.971726   29780 start.go:881] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0124 10:51:49.971758   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:49.979627   29780 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0124 10:51:49.979638   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0124 10:51:49.979682   29780 host.go:66] Checking if "newest-cni-783000" exists ...
	I0124 10:51:50.021470   29780 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0124 10:51:50.021512   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0124 10:51:50.058864   29780 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0124 10:51:50.021604   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:50.022234   29780 cli_runner.go:164] Run: docker container inspect newest-cni-783000 --format={{.State.Status}}
	I0124 10:51:50.059020   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:50.096897   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0124 10:51:50.096921   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0124 10:51:50.098057   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:50.113108   29780 api_server.go:51] waiting for apiserver process to appear ...
	I0124 10:51:50.113204   29780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 10:51:50.129928   29780 api_server.go:71] duration metric: took 346.024787ms to wait for apiserver process to appear ...
	I0124 10:51:50.129955   29780 api_server.go:87] waiting for apiserver healthz status ...
	I0124 10:51:50.129970   29780 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56560/healthz ...
	I0124 10:51:50.137730   29780 api_server.go:278] https://127.0.0.1:56560/healthz returned 200:
	ok
	I0124 10:51:50.140022   29780 api_server.go:140] control plane version: v1.26.1
	I0124 10:51:50.140042   29780 api_server.go:130] duration metric: took 10.0807ms to wait for apiserver health ...
	I0124 10:51:50.140058   29780 system_pods.go:43] waiting for kube-system pods to appear ...
	I0124 10:51:50.150126   29780 system_pods.go:59] 8 kube-system pods found
	I0124 10:51:50.150150   29780 system_pods.go:61] "coredns-787d4945fb-fjzwt" [22caaaf6-3474-4a1b-bcaf-c9853214930e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0124 10:51:50.150158   29780 system_pods.go:61] "etcd-newest-cni-783000" [eb99adef-1a7e-414a-b3d0-ddce87974396] Running
	I0124 10:51:50.150169   29780 system_pods.go:61] "kube-apiserver-newest-cni-783000" [7263f699-7f8a-44f5-8c28-f82ad6ab8379] Running
	I0124 10:51:50.150183   29780 system_pods.go:61] "kube-controller-manager-newest-cni-783000" [5e8ac97c-b5ee-4b7a-965c-d871f8fef1c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0124 10:51:50.150200   29780 system_pods.go:61] "kube-proxy-xgrfr" [df20d374-b9cf-412e-8310-68152cb2cfcf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0124 10:51:50.150226   29780 system_pods.go:61] "kube-scheduler-newest-cni-783000" [24943f33-29f5-4197-8ceb-3d0602bc085b] Running
	I0124 10:51:50.150240   29780 system_pods.go:61] "metrics-server-7997d45854-pl5nw" [fd8dad35-f008-4d0f-917d-c71707e6c650] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0124 10:51:50.150250   29780 system_pods.go:61] "storage-provisioner" [42c87f27-355c-4845-8437-c80e6de9d89a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0124 10:51:50.150256   29780 system_pods.go:74] duration metric: took 10.1907ms to wait for pod list to return data ...
	I0124 10:51:50.150265   29780 default_sa.go:34] waiting for default service account to be created ...
	I0124 10:51:50.154454   29780 default_sa.go:45] found service account: "default"
	I0124 10:51:50.154470   29780 default_sa.go:55] duration metric: took 4.184192ms for default service account to be created ...
	I0124 10:51:50.154481   29780 kubeadm.go:578] duration metric: took 370.588378ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0124 10:51:50.154498   29780 node_conditions.go:102] verifying NodePressure condition ...
	I0124 10:51:50.158893   29780 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0124 10:51:50.158908   29780 node_conditions.go:123] node cpu capacity is 6
	I0124 10:51:50.158916   29780 node_conditions.go:105] duration metric: took 4.414225ms to run NodePressure ...
	I0124 10:51:50.158924   29780 start.go:226] waiting for startup goroutines ...
	I0124 10:51:50.187030   29780 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0124 10:51:50.187044   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0124 10:51:50.187178   29780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783000
	I0124 10:51:50.190054   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:50.190216   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:50.190275   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:50.255341   29780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/newest-cni-783000/id_rsa Username:docker}
	I0124 10:51:50.324649   29780 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0124 10:51:50.324662   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0124 10:51:50.325595   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0124 10:51:50.325604   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0124 10:51:50.327588   29780 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0124 10:51:50.368473   29780 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0124 10:51:50.368486   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0124 10:51:50.373293   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0124 10:51:50.373309   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0124 10:51:50.382034   29780 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0124 10:51:50.390912   29780 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0124 10:51:50.390931   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0124 10:51:50.396819   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0124 10:51:50.396836   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0124 10:51:50.469356   29780 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0124 10:51:50.481116   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0124 10:51:50.481130   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0124 10:51:50.504627   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0124 10:51:50.504642   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0124 10:51:50.580506   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0124 10:51:50.580523   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0124 10:51:50.666988   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0124 10:51:50.667002   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0124 10:51:50.769473   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0124 10:51:50.769492   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0124 10:51:50.788780   29780 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0124 10:51:50.788800   29780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0124 10:51:50.809423   29780 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0124 10:51:51.499283   29780 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.171665367s)
	I0124 10:51:51.499303   29780 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.117243607s)
	I0124 10:51:51.515963   29780 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.046574062s)
	I0124 10:51:51.515990   29780 addons.go:457] Verifying addon metrics-server=true in "newest-cni-783000"
	I0124 10:51:51.654165   29780 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-783000 addons enable metrics-server	
	
	
	I0124 10:51:51.675440   29780 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0124 10:51:51.696033   29780 addons.go:488] enableAddons completed in 1.921176192s
	I0124 10:51:51.696534   29780 ssh_runner.go:195] Run: rm -f paused
	I0124 10:51:51.736912   29780 start.go:538] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0124 10:51:51.758360   29780 out.go:177] * Done! kubectl is now configured to use "newest-cni-783000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2023-01-24 18:34:18 UTC, end at Tue 2023-01-24 19:01:22 UTC. --
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[438]: time="2023-01-24T18:34:21.087677716Z" level=info msg="Daemon has completed initialization"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[438]: time="2023-01-24T18:34:21.111857700Z" level=info msg="API listen on [::]:2376"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[438]: time="2023-01-24T18:34:21.114467184Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[438]: time="2023-01-24T18:34:21.115118985Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[438]: time="2023-01-24T18:34:21.115247831Z" level=info msg="Daemon shutdown complete"
	Jan 24 18:34:21 old-k8s-version-115000 systemd[1]: docker.service: Succeeded.
	Jan 24 18:34:21 old-k8s-version-115000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 24 18:34:21 old-k8s-version-115000 systemd[1]: Starting Docker Application Container Engine...
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.167334866Z" level=info msg="Starting up"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.168932416Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.168971093Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.169031380Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.169044583Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.170673568Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.170690657Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.170705796Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.170718011Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.176701201Z" level=info msg="Loading containers: start."
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.255090071Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.286732777Z" level=info msg="Loading containers: done."
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.295014375Z" level=info msg="Docker daemon" commit=42c8b31 graphdriver(s)=overlay2 version=20.10.22
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.295077787Z" level=info msg="Daemon has completed initialization"
	Jan 24 18:34:21 old-k8s-version-115000 systemd[1]: Started Docker Application Container Engine.
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.319378820Z" level=info msg="API listen on [::]:2376"
	Jan 24 18:34:21 old-k8s-version-115000 dockerd[626]: time="2023-01-24T18:34:21.322431114Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-01-24T19:01:24Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Jan24 18:23] hrtimer: interrupt took 2585846 ns
	
	* 
	* ==> kernel <==
	*  19:01:24 up  2:00,  0 users,  load average: 0.27, 0.50, 0.86
	Linux old-k8s-version-115000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2023-01-24 18:34:18 UTC, end at Tue 2023-01-24 19:01:24 UTC. --
	Jan 24 19:01:22 old-k8s-version-115000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 24 19:01:23 old-k8s-version-115000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1667.
	Jan 24 19:01:23 old-k8s-version-115000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 24 19:01:23 old-k8s-version-115000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 24 19:01:23 old-k8s-version-115000 kubelet[34797]: I0124 19:01:23.725080   34797 server.go:410] Version: v1.16.0
	Jan 24 19:01:23 old-k8s-version-115000 kubelet[34797]: I0124 19:01:23.725340   34797 plugins.go:100] No cloud provider specified.
	Jan 24 19:01:23 old-k8s-version-115000 kubelet[34797]: I0124 19:01:23.725355   34797 server.go:773] Client rotation is on, will bootstrap in background
	Jan 24 19:01:23 old-k8s-version-115000 kubelet[34797]: I0124 19:01:23.727290   34797 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 24 19:01:23 old-k8s-version-115000 kubelet[34797]: W0124 19:01:23.727956   34797 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 24 19:01:23 old-k8s-version-115000 kubelet[34797]: W0124 19:01:23.728030   34797 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 24 19:01:23 old-k8s-version-115000 kubelet[34797]: F0124 19:01:23.728064   34797 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 24 19:01:23 old-k8s-version-115000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 24 19:01:23 old-k8s-version-115000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 24 19:01:24 old-k8s-version-115000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Jan 24 19:01:24 old-k8s-version-115000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 24 19:01:24 old-k8s-version-115000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 24 19:01:24 old-k8s-version-115000 kubelet[34818]: I0124 19:01:24.470829   34818 server.go:410] Version: v1.16.0
	Jan 24 19:01:24 old-k8s-version-115000 kubelet[34818]: I0124 19:01:24.471127   34818 plugins.go:100] No cloud provider specified.
	Jan 24 19:01:24 old-k8s-version-115000 kubelet[34818]: I0124 19:01:24.471165   34818 server.go:773] Client rotation is on, will bootstrap in background
	Jan 24 19:01:24 old-k8s-version-115000 kubelet[34818]: I0124 19:01:24.472993   34818 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 24 19:01:24 old-k8s-version-115000 kubelet[34818]: W0124 19:01:24.473801   34818 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 24 19:01:24 old-k8s-version-115000 kubelet[34818]: W0124 19:01:24.473876   34818 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 24 19:01:24 old-k8s-version-115000 kubelet[34818]: F0124 19:01:24.473905   34818 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 24 19:01:24 old-k8s-version-115000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 24 19:01:24 old-k8s-version-115000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0124 11:01:24.467423   30624 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-115000 -n old-k8s-version-115000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-115000 -n old-k8s-version-115000: exit status 2 (401.943082ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-115000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.91s)


Test pass (274/306)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 18.7
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.26.1/json-events 6
11 TestDownloadOnly/v1.26.1/preload-exists 0
14 TestDownloadOnly/v1.26.1/kubectl 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.3
16 TestDownloadOnly/DeleteAll 0.67
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
18 TestDownloadOnlyKic 11.98
19 TestBinaryMirror 1.7
20 TestOffline 72.75
22 TestAddons/Setup 228.48
26 TestAddons/parallel/MetricsServer 5.7
27 TestAddons/parallel/HelmTiller 13.05
29 TestAddons/parallel/CSI 46.6
30 TestAddons/parallel/Headlamp 10.48
31 TestAddons/parallel/CloudSpanner 5.5
34 TestAddons/serial/GCPAuth/Namespaces 0.11
35 TestAddons/StoppedEnableDisable 11.59
36 TestCertOptions 64.5
37 TestCertExpiration 274.13
38 TestDockerFlags 80.5
39 TestForceSystemdFlag 66.36
40 TestForceSystemdEnv 54.98
42 TestHyperKitDriverInstallOrUpdate 8.5
45 TestErrorSpam/setup 52.42
46 TestErrorSpam/start 2.34
47 TestErrorSpam/status 1.34
48 TestErrorSpam/pause 1.82
49 TestErrorSpam/unpause 1.92
50 TestErrorSpam/stop 11.59
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 70.73
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 44.23
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.08
61 TestFunctional/serial/CacheCmd/cache/add_remote 6.95
62 TestFunctional/serial/CacheCmd/cache/add_local 1.75
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
64 TestFunctional/serial/CacheCmd/cache/list 0.08
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.43
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.81
67 TestFunctional/serial/CacheCmd/cache/delete 0.18
68 TestFunctional/serial/MinikubeKubectlCmd 0.56
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.7
70 TestFunctional/serial/ExtraConfig 43.61
71 TestFunctional/serial/ComponentHealth 0.05
72 TestFunctional/serial/LogsCmd 3.14
73 TestFunctional/serial/LogsFileCmd 3.25
75 TestFunctional/parallel/ConfigCmd 0.51
76 TestFunctional/parallel/DashboardCmd 17.88
77 TestFunctional/parallel/DryRun 1.9
78 TestFunctional/parallel/InternationalLanguage 0.77
79 TestFunctional/parallel/StatusCmd 1.3
82 TestFunctional/parallel/ServiceCmd 17.69
84 TestFunctional/parallel/AddonsCmd 0.27
85 TestFunctional/parallel/PersistentVolumeClaim 26.77
87 TestFunctional/parallel/SSHCmd 0.87
88 TestFunctional/parallel/CpCmd 2.13
89 TestFunctional/parallel/MySQL 28.84
90 TestFunctional/parallel/FileSync 0.45
91 TestFunctional/parallel/CertSync 2.75
95 TestFunctional/parallel/NodeLabels 0.09
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
99 TestFunctional/parallel/License 0.44
100 TestFunctional/parallel/Version/short 0.13
101 TestFunctional/parallel/Version/components 0.74
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
106 TestFunctional/parallel/ImageCommands/ImageBuild 3.45
107 TestFunctional/parallel/ImageCommands/Setup 2.45
108 TestFunctional/parallel/DockerEnv/bash 2
109 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.75
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.51
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.46
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.6
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.81
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.07
116 TestFunctional/parallel/ImageCommands/ImageRemove 0.82
117 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.23
118 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.62
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.21
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
130 TestFunctional/parallel/ProfileCmd/profile_list 0.52
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
132 TestFunctional/parallel/MountCmd/any-port 8.96
133 TestFunctional/parallel/MountCmd/specific-port 3.2
134 TestFunctional/delete_addon-resizer_images 0.16
135 TestFunctional/delete_my-image_image 0.06
136 TestFunctional/delete_minikube_cached_images 0.06
140 TestImageBuild/serial/NormalBuild 2.3
141 TestImageBuild/serial/BuildWithBuildArg 0.95
142 TestImageBuild/serial/BuildWithDockerIgnore 0.48
143 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.42
153 TestJSONOutput/start/Command 69.62
154 TestJSONOutput/start/Audit 0
156 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/pause/Command 0.67
160 TestJSONOutput/pause/Audit 0
162 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/unpause/Command 0.64
166 TestJSONOutput/unpause/Audit 0
168 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/stop/Command 10.85
172 TestJSONOutput/stop/Audit 0
174 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
176 TestErrorJSONOutput 0.78
178 TestKicCustomNetwork/create_custom_network 51.12
179 TestKicCustomNetwork/use_default_bridge_network 53.74
180 TestKicExistingNetwork 50.81
181 TestKicCustomSubnet 57.18
182 TestKicStaticIP 53.25
183 TestMainNoArgs 0.08
184 TestMinikubeProfile 109.19
187 TestMountStart/serial/StartWithMountFirst 8.19
188 TestMountStart/serial/VerifyMountFirst 0.4
189 TestMountStart/serial/StartWithMountSecond 7.98
190 TestMountStart/serial/VerifyMountSecond 0.47
191 TestMountStart/serial/DeleteFirst 2.14
192 TestMountStart/serial/VerifyMountPostDelete 0.4
193 TestMountStart/serial/Stop 1.58
194 TestMountStart/serial/RestartStopped 9
195 TestMountStart/serial/VerifyMountPostStop 0.4
198 TestMultiNode/serial/FreshStart2Nodes 88.44
199 TestMultiNode/serial/DeployApp2Nodes 9.46
200 TestMultiNode/serial/PingHostFrom2Pods 0.93
201 TestMultiNode/serial/AddNode 22.78
202 TestMultiNode/serial/ProfileList 0.5
203 TestMultiNode/serial/CopyFile 15.32
204 TestMultiNode/serial/StopNode 3.09
205 TestMultiNode/serial/StartAfterStop 10.78
206 TestMultiNode/serial/RestartKeepsNodes 109.15
207 TestMultiNode/serial/DeleteNode 6.23
208 TestMultiNode/serial/StopMultiNode 22.09
209 TestMultiNode/serial/RestartMultiNode 71.27
210 TestMultiNode/serial/ValidateNameConflict 57.29
214 TestPreload 115.17
216 TestScheduledStopUnix 126.86
217 TestSkaffold 85.39
219 TestInsufficientStorage 15.03
235 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 7.21
236 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.39
237 TestStoppedBinaryUpgrade/Setup 0.48
239 TestStoppedBinaryUpgrade/MinikubeLogs 3.54
241 TestPause/serial/Start 65.71
242 TestPause/serial/SecondStartNoReconfiguration 44.44
243 TestPause/serial/Pause 0.73
244 TestPause/serial/VerifyStatus 0.43
245 TestPause/serial/Unpause 0.73
246 TestPause/serial/PauseAgain 0.82
247 TestPause/serial/DeletePaused 2.73
248 TestPause/serial/VerifyDeletedResources 0.59
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.4
258 TestNoKubernetes/serial/StartWithK8s 54.3
259 TestNoKubernetes/serial/StartWithStopK8s 9.22
260 TestNoKubernetes/serial/Start 7.26
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.5
262 TestNoKubernetes/serial/ProfileList 2.47
263 TestNoKubernetes/serial/Stop 1.69
264 TestNoKubernetes/serial/StartNoArgs 5.57
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
266 TestNetworkPlugins/group/auto/Start 71.07
267 TestNetworkPlugins/group/flannel/Start 94.55
268 TestNetworkPlugins/group/auto/KubeletFlags 0.41
269 TestNetworkPlugins/group/auto/NetCatPod 15.22
270 TestNetworkPlugins/group/auto/DNS 0.14
271 TestNetworkPlugins/group/auto/Localhost 0.12
272 TestNetworkPlugins/group/auto/HairPin 0.14
273 TestNetworkPlugins/group/kindnet/Start 72.63
274 TestNetworkPlugins/group/flannel/ControllerPod 5.02
275 TestNetworkPlugins/group/flannel/KubeletFlags 0.46
276 TestNetworkPlugins/group/flannel/NetCatPod 15.25
277 TestNetworkPlugins/group/flannel/DNS 0.15
278 TestNetworkPlugins/group/flannel/Localhost 0.14
279 TestNetworkPlugins/group/flannel/HairPin 0.15
280 TestNetworkPlugins/group/enable-default-cni/Start 66.52
281 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
282 TestNetworkPlugins/group/kindnet/KubeletFlags 0.47
283 TestNetworkPlugins/group/kindnet/NetCatPod 19.36
284 TestNetworkPlugins/group/kindnet/DNS 0.13
285 TestNetworkPlugins/group/kindnet/Localhost 0.11
286 TestNetworkPlugins/group/kindnet/HairPin 0.13
287 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.51
288 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.25
289 TestNetworkPlugins/group/bridge/Start 69.88
290 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
291 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
292 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
293 TestNetworkPlugins/group/kubenet/Start 66.25
294 TestNetworkPlugins/group/bridge/KubeletFlags 0.48
295 TestNetworkPlugins/group/bridge/NetCatPod 15.22
296 TestNetworkPlugins/group/bridge/DNS 0.14
297 TestNetworkPlugins/group/bridge/Localhost 0.12
298 TestNetworkPlugins/group/bridge/HairPin 0.12
299 TestNetworkPlugins/group/kubenet/KubeletFlags 0.46
300 TestNetworkPlugins/group/kubenet/NetCatPod 16.26
301 TestNetworkPlugins/group/custom-flannel/Start 78.62
302 TestNetworkPlugins/group/kubenet/DNS 0.15
303 TestNetworkPlugins/group/kubenet/Localhost 0.11
304 TestNetworkPlugins/group/kubenet/HairPin 0.14
305 TestNetworkPlugins/group/calico/Start 104.67
306 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.45
307 TestNetworkPlugins/group/custom-flannel/NetCatPod 15.2
308 TestNetworkPlugins/group/custom-flannel/DNS 0.14
309 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
310 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
311 TestNetworkPlugins/group/false/Start 80.37
312 TestNetworkPlugins/group/calico/ControllerPod 5.02
313 TestNetworkPlugins/group/calico/KubeletFlags 0.46
314 TestNetworkPlugins/group/calico/NetCatPod 18.27
315 TestNetworkPlugins/group/calico/DNS 0.13
316 TestNetworkPlugins/group/calico/Localhost 0.12
317 TestNetworkPlugins/group/calico/HairPin 0.11
320 TestNetworkPlugins/group/false/KubeletFlags 0.47
321 TestNetworkPlugins/group/false/NetCatPod 14.22
322 TestNetworkPlugins/group/false/DNS 0.12
323 TestNetworkPlugins/group/false/Localhost 0.11
324 TestNetworkPlugins/group/false/HairPin 0.11
326 TestStartStop/group/no-preload/serial/FirstStart 61.61
327 TestStartStop/group/no-preload/serial/DeployApp 9.28
328 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.84
329 TestStartStop/group/no-preload/serial/Stop 10.97
330 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.46
331 TestStartStop/group/no-preload/serial/SecondStart 308.24
334 TestStartStop/group/old-k8s-version/serial/Stop 1.58
335 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.4
337 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.02
338 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
339 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.48
340 TestStartStop/group/no-preload/serial/Pause 3.37
342 TestStartStop/group/embed-certs/serial/FirstStart 73.93
343 TestStartStop/group/embed-certs/serial/DeployApp 8.28
344 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.85
345 TestStartStop/group/embed-certs/serial/Stop 11
346 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.41
347 TestStartStop/group/embed-certs/serial/SecondStart 306.04
349 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 7.02
350 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
351 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.45
352 TestStartStop/group/embed-certs/serial/Pause 3.41
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 66.62
355 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
356 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
357 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.92
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.4
359 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 313.75
360 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
361 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
362 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.47
363 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.44
365 TestStartStop/group/newest-cni/serial/FirstStart 64
366 TestStartStop/group/newest-cni/serial/DeployApp 0
367 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.84
368 TestStartStop/group/newest-cni/serial/Stop 10.86
369 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.4
370 TestStartStop/group/newest-cni/serial/SecondStart 25.64
371 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.45
374 TestStartStop/group/newest-cni/serial/Pause 3.44
TestDownloadOnly/v1.16.0/json-events (18.7s)
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-412000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-412000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (18.696111258s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (18.70s)
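
The -o=json flag switches minikube's progress output to machine-readable JSON events on stdout (one JSON object per line in current releases; treat the exact event schema as an assumption here), while --alsologtostderr keeps the human-readable log on stderr. A minimal sketch of replaying the same download-only run and listing the event types with jq:

    out/minikube-darwin-amd64 start -o=json --download-only -p download-only-412000 \
      --force --alsologtostderr --kubernetes-version=v1.16.0 \
      --container-runtime=docker --driver=docker | jq -r 'select(.type != null) | .type'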

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)
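
The preload-exists and kubectl subtests only assert that the download-only run above left its artifacts in the local cache. A hedged sketch of inspecting the same artifacts by hand, using the MINIKUBE_HOME and cache paths that appear in this run's logs (the cache layout is an implementation detail and may change between releases):

    export MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
    ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/"
    ls -lh "$MINIKUBE_HOME/cache/darwin/amd64/v1.16.0/kubectl"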

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.3s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-412000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-412000: exit status 85 (297.676604ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-412000 | jenkins | v1.28.0 | 24 Jan 23 09:27 PST |          |
	|         | -p download-only-412000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/24 09:27:22
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0124 09:27:22.102546    4357 out.go:296] Setting OutFile to fd 1 ...
	I0124 09:27:22.102698    4357 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 09:27:22.102703    4357 out.go:309] Setting ErrFile to fd 2...
	I0124 09:27:22.102707    4357 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 09:27:22.102830    4357 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
	W0124 09:27:22.102933    4357 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15565-3057/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15565-3057/.minikube/config/config.json: no such file or directory
	I0124 09:27:22.103642    4357 out.go:303] Setting JSON to true
	I0124 09:27:22.122202    4357 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1617,"bootTime":1674579625,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0124 09:27:22.122284    4357 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0124 09:27:22.144962    4357 out.go:97] [download-only-412000] minikube v1.28.0 on Darwin 13.1
	I0124 09:27:22.145138    4357 notify.go:220] Checking for updates...
	W0124 09:27:22.145204    4357 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball: no such file or directory
	I0124 09:27:22.165541    4357 out.go:169] MINIKUBE_LOCATION=15565
	I0124 09:27:22.187070    4357 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 09:27:22.209156    4357 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0124 09:27:22.252009    4357 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 09:27:22.274065    4357 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	W0124 09:27:22.316837    4357 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0124 09:27:22.317228    4357 driver.go:365] Setting default libvirt URI to qemu:///system
	I0124 09:27:22.377857    4357 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0124 09:27:22.377978    4357 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 09:27:22.523924    4357 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-01-24 17:27:22.426208505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 09:27:22.545139    4357 out.go:97] Using the docker driver based on user configuration
	I0124 09:27:22.545194    4357 start.go:296] selected driver: docker
	I0124 09:27:22.545230    4357 start.go:840] validating driver "docker" against <nil>
	I0124 09:27:22.545447    4357 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 09:27:22.690658    4357 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-01-24 17:27:22.59813812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:
/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 09:27:22.690770    4357 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0124 09:27:22.694759    4357 start_flags.go:386] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0124 09:27:22.694874    4357 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0124 09:27:22.715932    4357 out.go:169] Using Docker Desktop driver with root privileges
	I0124 09:27:22.737006    4357 cni.go:84] Creating CNI manager for ""
	I0124 09:27:22.737039    4357 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0124 09:27:22.737057    4357 start_flags.go:319] config:
	{Name:download-only-412000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-412000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 09:27:22.759105    4357 out.go:97] Starting control plane node download-only-412000 in cluster download-only-412000
	I0124 09:27:22.759214    4357 cache.go:120] Beginning downloading kic base image for docker with docker
	I0124 09:27:22.780706    4357 out.go:97] Pulling base image ...
	I0124 09:27:22.780790    4357 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0124 09:27:22.780896    4357 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0124 09:27:22.834654    4357 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a to local cache
	I0124 09:27:22.834808    4357 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0124 09:27:22.834826    4357 cache.go:57] Caching tarball of preloaded images
	I0124 09:27:22.834879    4357 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local cache directory
	I0124 09:27:22.834967    4357 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0124 09:27:22.835006    4357 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a to local cache
	I0124 09:27:22.855836    4357 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0124 09:27:22.855862    4357 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0124 09:27:22.949913    4357 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0124 09:27:25.483213    4357 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0124 09:27:25.483441    4357 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0124 09:27:26.022787    4357 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0124 09:27:26.022993    4357 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/download-only-412000/config.json ...
	I0124 09:27:26.023021    4357 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/download-only-412000/config.json: {Name:mk4a66dca794e2ccdcf3d4eaea3567dd4ce918f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0124 09:27:26.023255    4357 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0124 09:27:26.023510    4357 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-412000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)
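
The exit status 85 above is expected here: the profile was created with --download-only, so no control plane node exists and minikube logs has nothing to collect; the test passes as long as the command fails cleanly and quickly. A minimal way to reproduce the same check by hand (the specific status value is taken from this run, not treated as a documented constant):

    out/minikube-darwin-amd64 logs -p download-only-412000
    echo "minikube logs exit status: $?"   # non-zero while no cluster exists for the profile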

                                                
                                    
TestDownloadOnly/v1.26.1/json-events (6s)
=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-412000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-412000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker : (6.003309103s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (6.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.26.1/kubectl
--- PASS: TestDownloadOnly/v1.26.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/LogsDuration (0.3s)
=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-412000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-412000: exit status 85 (299.495866ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-412000 | jenkins | v1.28.0 | 24 Jan 23 09:27 PST |          |
	|         | -p download-only-412000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-412000 | jenkins | v1.28.0 | 24 Jan 23 09:27 PST |          |
	|         | -p download-only-412000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/24 09:27:41
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0124 09:27:41.100736    4400 out.go:296] Setting OutFile to fd 1 ...
	I0124 09:27:41.100893    4400 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 09:27:41.100898    4400 out.go:309] Setting ErrFile to fd 2...
	I0124 09:27:41.100902    4400 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 09:27:41.101008    4400 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
	W0124 09:27:41.101102    4400 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15565-3057/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15565-3057/.minikube/config/config.json: no such file or directory
	I0124 09:27:41.101441    4400 out.go:303] Setting JSON to true
	I0124 09:27:41.119779    4400 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1636,"bootTime":1674579625,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0124 09:27:41.119876    4400 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0124 09:27:41.141897    4400 out.go:97] [download-only-412000] minikube v1.28.0 on Darwin 13.1
	I0124 09:27:41.142003    4400 notify.go:220] Checking for updates...
	I0124 09:27:41.163484    4400 out.go:169] MINIKUBE_LOCATION=15565
	I0124 09:27:41.184712    4400 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 09:27:41.205754    4400 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0124 09:27:41.226523    4400 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 09:27:41.247662    4400 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	W0124 09:27:41.289421    4400 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0124 09:27:41.289843    4400 config.go:180] Loaded profile config "download-only-412000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0124 09:27:41.289894    4400 start.go:748] api.Load failed for download-only-412000: filestore "download-only-412000": Docker machine "download-only-412000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0124 09:27:41.289944    4400 driver.go:365] Setting default libvirt URI to qemu:///system
	W0124 09:27:41.289965    4400 start.go:748] api.Load failed for download-only-412000: filestore "download-only-412000": Docker machine "download-only-412000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0124 09:27:41.352303    4400 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0124 09:27:41.352433    4400 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 09:27:41.498989    4400 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-01-24 17:27:41.403160351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 09:27:41.520302    4400 out.go:97] Using the docker driver based on existing profile
	I0124 09:27:41.520326    4400 start.go:296] selected driver: docker
	I0124 09:27:41.520333    4400 start.go:840] validating driver "docker" against &{Name:download-only-412000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-412000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/so
cket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 09:27:41.520524    4400 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 09:27:41.662298    4400 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-01-24 17:27:41.569783382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 09:27:41.664639    4400 cni.go:84] Creating CNI manager for ""
	I0124 09:27:41.664660    4400 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0124 09:27:41.664675    4400 start_flags.go:319] config:
	{Name:download-only-412000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:download-only-412000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticI
P:}
	I0124 09:27:41.686496    4400 out.go:97] Starting control plane node download-only-412000 in cluster download-only-412000
	I0124 09:27:41.686616    4400 cache.go:120] Beginning downloading kic base image for docker with docker
	I0124 09:27:41.708176    4400 out.go:97] Pulling base image ...
	I0124 09:27:41.708308    4400 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 09:27:41.708374    4400 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0124 09:27:41.762495    4400 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a to local cache
	I0124 09:27:41.762699    4400 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local cache directory
	I0124 09:27:41.762721    4400 image.go:64] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local cache directory, skipping pull
	I0124 09:27:41.762727    4400 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in cache, skipping pull
	I0124 09:27:41.762735    4400 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a as a tarball
	I0124 09:27:41.767952    4400 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0124 09:27:41.767986    4400 cache.go:57] Caching tarball of preloaded images
	I0124 09:27:41.768271    4400 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0124 09:27:41.790428    4400 out.go:97] Downloading Kubernetes v1.26.1 preload ...
	I0124 09:27:41.790597    4400 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0124 09:27:41.882642    4400 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4?checksum=md5:44c239b3385ae5d04aaa293b94f853d9 -> /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0124 09:27:45.659946    4400 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0124 09:27:45.660125    4400 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-412000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.30s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.67s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.67s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-412000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

                                                
                                    
TestDownloadOnlyKic (11.98s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-565000 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-565000 --force --alsologtostderr --driver=docker : (10.775760392s)
helpers_test.go:175: Cleaning up "download-docker-565000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-565000
--- PASS: TestDownloadOnlyKic (11.98s)

                                                
                                    
TestBinaryMirror (1.7s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-617000 --alsologtostderr --binary-mirror http://127.0.0.1:49476 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-617000 --alsologtostderr --binary-mirror http://127.0.0.1:49476 --driver=docker : (1.081360503s)
helpers_test.go:175: Cleaning up "binary-mirror-617000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-617000
--- PASS: TestBinaryMirror (1.70s)
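
In this test the harness serves a binary mirror on 127.0.0.1:49476 (the port comes from the command above) and points --binary-mirror at it, so kubectl/kubelet/kubeadm downloads come from the mirror instead of the public release bucket. A rough sketch of the same wiring with a stand-in mirror; the python http.server line and directory are illustrative only, and the mirror has to reproduce the upstream release-bucket layout, which is not shown here:

    python3 -m http.server 49476 --directory /tmp/k8s-release-mirror &   # stand-in mirror, illustrative
    out/minikube-darwin-amd64 start --download-only -p binary-mirror-617000 \
      --alsologtostderr --binary-mirror http://127.0.0.1:49476 --driver=docker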

                                                
                                    
TestOffline (72.75s)
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-174000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-174000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (1m9.957072969s)
helpers_test.go:175: Cleaning up "offline-docker-174000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-174000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-174000: (2.793358016s)
--- PASS: TestOffline (72.75s)

                                                
                                    
TestAddons/Setup (228.48s)
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-709000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-709000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m48.476097784s)
--- PASS: TestAddons/Setup (228.48s)
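
The addons for this profile are all enabled up front via repeated --addons flags. They can also be toggled after the cluster is up; only the disable form appears verbatim in the parallel tests below, while the list and enable subcommands are standard minikube CLI shown here for context:

    out/minikube-darwin-amd64 -p addons-709000 addons list
    out/minikube-darwin-amd64 -p addons-709000 addons enable metrics-server
    out/minikube-darwin-amd64 -p addons-709000 addons disable metrics-server --alsologtostderr -v=1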

                                                
                                    
TestAddons/parallel/MetricsServer (5.7s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 2.153301ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-tz6f9" [61f804bd-1f86-4e93-ba7f-cf4641c42967] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010599418s
addons_test.go:380: (dbg) Run:  kubectl --context addons-709000 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-darwin-amd64 -p addons-709000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.70s)

                                                
                                    
TestAddons/parallel/HelmTiller (13.05s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 2.707276ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-nwkg5" [4efd12be-8118-4fa7-ba03-ad541cc01f00] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009708241s
addons_test.go:438: (dbg) Run:  kubectl --context addons-709000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:438: (dbg) Done: kubectl --context addons-709000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.524533876s)
addons_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 -p addons-709000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.05s)

                                                
                                    
TestAddons/parallel/CSI (46.6s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 5.432185ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-709000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-709000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-709000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [cbb2b61b-499f-4565-a461-8952670ce1cf] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:344: "task-pv-pod" [cbb2b61b-499f-4565-a461-8952670ce1cf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:344: "task-pv-pod" [cbb2b61b-499f-4565-a461-8952670ce1cf] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.019963186s
addons_test.go:549: (dbg) Run:  kubectl --context addons-709000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-709000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:419: (dbg) Run:  kubectl --context addons-709000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-709000 delete pod task-pv-pod
addons_test.go:565: (dbg) Run:  kubectl --context addons-709000 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-709000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-709000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-709000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8db0f2e8-5f52-4ed9-8698-506284d79ba7] Pending
helpers_test.go:344: "task-pv-pod-restore" [8db0f2e8-5f52-4ed9-8698-506284d79ba7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:344: "task-pv-pod-restore" [8db0f2e8-5f52-4ed9-8698-506284d79ba7] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 15.012689465s
addons_test.go:591: (dbg) Run:  kubectl --context addons-709000 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-709000 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-709000 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-darwin-amd64 -p addons-709000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-darwin-amd64 -p addons-709000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.036404488s)
addons_test.go:607: (dbg) Run:  out/minikube-darwin-amd64 -p addons-709000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.60s)
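
The PVC and snapshot manifests referenced above come from the repo's testdata/csi-hostpath-driver directory and are not reproduced in this report. A minimal sketch of what such manifests typically look like for the csi-hostpath addon (the storage class and snapshot class names are assumptions based on upstream csi-driver-host-path examples, not the actual fixtures):
# claim comparable to testdata/csi-hostpath-driver/pvc.yaml
kubectl --context addons-709000 apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  storageClassName: csi-hostpath-sc
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
# snapshot of that claim, comparable to testdata/csi-hostpath-driver/snapshot.yaml
kubectl --context addons-709000 apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: hpvc
EOF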

                                                
                                    
TestAddons/parallel/Headlamp (10.48s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-709000 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-709000 --alsologtostderr -v=1: (1.476077567s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-flmx6" [3c0cf977-e327-4e3c-93be-2ee50fe485ad] Pending
helpers_test.go:344: "headlamp-5759877c79-flmx6" [3c0cf977-e327-4e3c-93be-2ee50fe485ad] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-flmx6" [3c0cf977-e327-4e3c-93be-2ee50fe485ad] Running

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.007053469s
--- PASS: TestAddons/parallel/Headlamp (10.48s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.5s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
helpers_test.go:344: "cloud-spanner-emulator-5dcf58dbbb-85jwl" [0e7f7cf7-7dea-4ee0-8485-66031e7463b1] Running

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008882612s
addons_test.go:813: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-709000
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-709000 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-709000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.59s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-709000
addons_test.go:147: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-709000: (11.141445999s)
addons_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-709000
addons_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-709000
--- PASS: TestAddons/StoppedEnableDisable (11.59s)

                                                
                                    
TestCertOptions (64.5s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-539000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-539000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (1m0.897565121s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-539000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-539000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-539000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-539000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-539000: (2.720930817s)
--- PASS: TestCertOptions (64.50s)
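
A sketch of how the options exercised above can be verified by hand on such a profile (the grep filters are illustrative additions to the test's own ssh commands):
# the extra --apiserver-ips / --apiserver-names should show up as SANs in the apiserver cert
out/minikube-darwin-amd64 -p cert-options-539000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
# the non-default --apiserver-port=8555 should appear in the server URL of the in-node kubeconfig
out/minikube-darwin-amd64 ssh -p cert-options-539000 -- "sudo cat /etc/kubernetes/admin.conf" | grep "server:"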

                                                
                                    
TestCertExpiration (274.13s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-602000 --memory=2048 --cert-expiration=3m --driver=docker 

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-602000 --memory=2048 --cert-expiration=3m --driver=docker : (58.443040423s)
E0124 10:11:50.929274    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-602000 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-602000 --memory=2048 --cert-expiration=8760h --driver=docker : (33.029124252s)
helpers_test.go:175: Cleaning up "cert-expiration-602000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-602000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-602000: (2.657832846s)
--- PASS: TestCertExpiration (274.13s)
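
The test first provisions certificates with a 3m lifetime, lets them expire, then restarts with --cert-expiration=8760h to force regeneration. A sketch of inspecting the current certificate validity on such a profile (illustrative, not from the test):
out/minikube-darwin-amd64 -p cert-expiration-602000 ssh "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"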

                                                
                                    
TestDockerFlags (80.5s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-195000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0124 10:09:53.981102    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-195000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (1m16.512812653s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-195000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-195000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-195000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-195000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-195000: (3.097468253s)
--- PASS: TestDockerFlags (80.50s)
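
The assertions behind the two ssh commands above check that --docker-env and --docker-opt reach the Docker daemon's systemd unit. A sketch of the same check by hand (the grep patterns are assumptions about how the options are rendered):
out/minikube-darwin-amd64 -p docker-flags-195000 ssh "sudo systemctl show docker --property=Environment --no-pager" | grep "FOO=BAR"
out/minikube-darwin-amd64 -p docker-flags-195000 ssh "sudo systemctl show docker --property=ExecStart --no-pager" | grep -e "--icc=true"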

                                                
                                    
TestForceSystemdFlag (66.36s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-557000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-557000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (1m3.072795818s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-557000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-557000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-557000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-557000: (2.800662813s)
--- PASS: TestForceSystemdFlag (66.36s)
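
With --force-systemd, the Docker daemon inside the node is expected to report the systemd cgroup driver, which is what the docker info query above asserts; by hand:
out/minikube-darwin-amd64 -p force-systemd-flag-557000 ssh "docker info --format {{.CgroupDriver}}"
# expected output per the test's intent: systemd (cgroupfs would indicate the flag was not applied)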

                                                
                                    
TestForceSystemdEnv (54.98s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-663000 --memory=2048 --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-663000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (51.630711648s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-663000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-663000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-663000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-663000: (2.8357349s)
--- PASS: TestForceSystemdEnv (54.98s)
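
This variant drives the same behaviour through the MINIKUBE_FORCE_SYSTEMD environment variable rather than a flag; a sketch of an equivalent manual invocation (illustrative):
MINIKUBE_FORCE_SYSTEMD=true out/minikube-darwin-amd64 start -p force-systemd-env-663000 --memory=2048 --driver=docker
out/minikube-darwin-amd64 -p force-systemd-env-663000 ssh "docker info --format {{.CgroupDriver}}"   # should report: systemd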

                                                
                                    
TestHyperKitDriverInstallOrUpdate (8.5s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.50s)

                                                
                                    
TestErrorSpam/setup (52.42s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-879000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-879000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 --driver=docker : (52.424601039s)
--- PASS: TestErrorSpam/setup (52.42s)

                                                
                                    
TestErrorSpam/start (2.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-879000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-879000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-879000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 start --dry-run
--- PASS: TestErrorSpam/start (2.34s)

                                                
                                    
TestErrorSpam/status (1.34s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-879000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-879000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-879000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 status
--- PASS: TestErrorSpam/status (1.34s)

                                                
                                    
TestErrorSpam/pause (1.82s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-879000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-879000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-879000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 pause
--- PASS: TestErrorSpam/pause (1.82s)

                                                
                                    
TestErrorSpam/unpause (1.92s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-879000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-879000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-879000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 unpause
--- PASS: TestErrorSpam/unpause (1.92s)

                                                
                                    
TestErrorSpam/stop (11.59s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-879000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-879000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 stop: (10.938300228s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-879000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-879000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-879000 stop
--- PASS: TestErrorSpam/stop (11.59s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/test/nested/copy/4355/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (70.73s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-997000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2161: (dbg) Done: out/minikube-darwin-amd64 start -p functional-997000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (1m10.725463881s)
--- PASS: TestFunctional/serial/StartWithProxy (70.73s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (44.23s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-997000 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-darwin-amd64 start -p functional-997000 --alsologtostderr -v=8: (44.232384449s)
functional_test.go:656: soft start took 44.232950677s for "functional-997000" cluster.
--- PASS: TestFunctional/serial/SoftStart (44.23s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-997000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (6.95s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 cache add k8s.gcr.io/pause:3.1: (2.395922303s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 cache add k8s.gcr.io/pause:3.3: (2.439584908s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 cache add k8s.gcr.io/pause:latest: (2.117702279s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.95s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.75s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-997000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3718059856/001
functional_test.go:1082: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 cache add minikube-local-cache-test:functional-997000
functional_test.go:1082: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 cache add minikube-local-cache-test:functional-997000: (1.194985757s)
functional_test.go:1087: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 cache delete minikube-local-cache-test:functional-997000
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-997000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.75s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.81s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-997000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (413.11712ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 cache reload: (1.441484642s)
functional_test.go:1156: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.81s)
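
Compressed, the round trip above is: delete a cached image inside the node, confirm crictl no longer finds it, run cache reload to push cached images back, and confirm it is present again:
out/minikube-darwin-amd64 -p functional-997000 ssh sudo docker rmi k8s.gcr.io/pause:latest
out/minikube-darwin-amd64 -p functional-997000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # fails: image removed
out/minikube-darwin-amd64 -p functional-997000 cache reload
out/minikube-darwin-amd64 -p functional-997000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again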

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.56s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 kubectl -- --context functional-997000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.56s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.7s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-997000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.70s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.61s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-997000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0124 09:36:50.945102    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 09:36:50.951134    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 09:36:50.961281    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 09:36:50.981525    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 09:36:51.023709    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 09:36:51.103837    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 09:36:51.264001    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 09:36:51.584417    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 09:36:52.225722    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 09:36:53.506482    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 09:36:56.066676    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 09:37:01.187188    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 09:37:11.429195    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
functional_test.go:750: (dbg) Done: out/minikube-darwin-amd64 start -p functional-997000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.614055947s)
functional_test.go:754: restart took 43.614202195s for "functional-997000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.61s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-997000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

                                                
                                    
TestFunctional/serial/LogsCmd (3.14s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 logs
E0124 09:37:31.909443    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
functional_test.go:1229: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 logs: (3.137366922s)
--- PASS: TestFunctional/serial/LogsCmd (3.14s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.25s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd4049830150/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd4049830150/001/logs.txt: (3.253620699s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.25s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-997000 config get cpus: exit status 14 (63.902006ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-997000 config get cpus: exit status 14 (63.810639ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
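
The non-zero exits above are the expected behaviour: config get returns exit code 14 when the key is unset. The full cycle the test runs, as plain commands:
out/minikube-darwin-amd64 -p functional-997000 config get cpus     # exit 14 while unset
out/minikube-darwin-amd64 -p functional-997000 config set cpus 2
out/minikube-darwin-amd64 -p functional-997000 config get cpus     # prints 2
out/minikube-darwin-amd64 -p functional-997000 config unset cpus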

                                                
                                    
TestFunctional/parallel/DashboardCmd (17.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-997000 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-997000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 7520: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.88s)

                                                
                                    
TestFunctional/parallel/DryRun (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-997000 --dry-run --memory 250MB --alsologtostderr --driver=docker 

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-997000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (899.898993ms)

                                                
                                                
-- stdout --
	* [functional-997000] minikube v1.28.0 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0124 09:38:46.884322    7383 out.go:296] Setting OutFile to fd 1 ...
	I0124 09:38:46.884913    7383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 09:38:46.884930    7383 out.go:309] Setting ErrFile to fd 2...
	I0124 09:38:46.884939    7383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 09:38:46.885199    7383 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
	I0124 09:38:46.905944    7383 out.go:303] Setting JSON to false
	I0124 09:38:46.926438    7383 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2301,"bootTime":1674579625,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0124 09:38:46.926637    7383 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0124 09:38:46.989623    7383 out.go:177] * [functional-997000] minikube v1.28.0 on Darwin 13.1
	I0124 09:38:47.032762    7383 notify.go:220] Checking for updates...
	I0124 09:38:47.075138    7383 out.go:177]   - MINIKUBE_LOCATION=15565
	I0124 09:38:47.117285    7383 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 09:38:47.138155    7383 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0124 09:38:47.159368    7383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 09:38:47.201118    7383 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	I0124 09:38:47.243314    7383 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0124 09:38:47.265101    7383 config.go:180] Loaded profile config "functional-997000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 09:38:47.265750    7383 driver.go:365] Setting default libvirt URI to qemu:///system
	I0124 09:38:47.338028    7383 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0124 09:38:47.338162    7383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 09:38:47.518872    7383 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-24 17:38:47.397828402 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 09:38:47.561128    7383 out.go:177] * Using the docker driver based on existing profile
	I0124 09:38:47.582419    7383 start.go:296] selected driver: docker
	I0124 09:38:47.582433    7383 start.go:840] validating driver "docker" against &{Name:functional-997000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-997000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:f
alse portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 09:38:47.582537    7383 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0124 09:38:47.606190    7383 out.go:177] 
	W0124 09:38:47.627404    7383 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0124 09:38:47.648226    7383 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-997000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.90s)
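
The exit status 23 above is deliberate: the first dry run asks for 250MB, below minikube's usable minimum of 1800MB, and is rejected with RSRC_INSUFFICIENT_REQ_MEMORY; the second dry run validates the existing profile without the undersized memory request. The two invocations, side by side:
out/minikube-darwin-amd64 start -p functional-997000 --dry-run --memory 250MB --alsologtostderr --driver=docker    # rejected: RSRC_INSUFFICIENT_REQ_MEMORY
out/minikube-darwin-amd64 start -p functional-997000 --dry-run --alsologtostderr -v=1 --driver=docker              # validates against the existing profile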

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-997000 --dry-run --memory 250MB --alsologtostderr --driver=docker 

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-997000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (770.249945ms)

                                                
                                                
-- stdout --
	* [functional-997000] minikube v1.28.0 sur Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0124 09:38:48.745233    7444 out.go:296] Setting OutFile to fd 1 ...
	I0124 09:38:48.745397    7444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 09:38:48.745402    7444 out.go:309] Setting ErrFile to fd 2...
	I0124 09:38:48.745406    7444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 09:38:48.745567    7444 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
	I0124 09:38:48.746122    7444 out.go:303] Setting JSON to false
	I0124 09:38:48.766593    7444 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2303,"bootTime":1674579625,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0124 09:38:48.766692    7444 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0124 09:38:48.787909    7444 out.go:177] * [functional-997000] minikube v1.28.0 sur Darwin 13.1
	I0124 09:38:48.809103    7444 notify.go:220] Checking for updates...
	I0124 09:38:48.830732    7444 out.go:177]   - MINIKUBE_LOCATION=15565
	I0124 09:38:48.851642    7444 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	I0124 09:38:48.893728    7444 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0124 09:38:48.935700    7444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0124 09:38:48.977493    7444 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	I0124 09:38:49.019680    7444 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0124 09:38:49.041002    7444 config.go:180] Loaded profile config "functional-997000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 09:38:49.041435    7444 driver.go:365] Setting default libvirt URI to qemu:///system
	I0124 09:38:49.110056    7444 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0124 09:38:49.110202    7444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0124 09:38:49.265178    7444 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-24 17:38:49.16660694 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:
/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0124 09:38:49.323858    7444 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0124 09:38:49.344831    7444 start.go:296] selected driver: docker
	I0124 09:38:49.344863    7444 start.go:840] validating driver "docker" against &{Name:functional-997000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-997000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:f
alse portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0124 09:38:49.345029    7444 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0124 09:38:49.372873    7444 out.go:177] 
	W0124 09:38:49.394022    7444 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0124 09:38:49.414674    7444 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.77s)
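Note: the French failure message above translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB"; the test passes precisely because the dry-run start is rejected, and it only checks that the rejection is localized. A rough manual replay, assuming minikube picks the French locale up from the environment (the log does not show how the harness forces it):

    LC_ALL=fr LANG=fr out/minikube-darwin-amd64 start -p functional-997000 --dry-run --memory 250MB --alsologtostderr --driver=docker
    # expected: exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY, message localized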

                                                
                                    
TestFunctional/parallel/StatusCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 status
functional_test.go:853: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.30s)
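Note: the -f argument above is an arbitrary Go template over the status structure (the "kublet" label is just literal text in the template, not a field name). The three invocations the test makes:

    out/minikube-darwin-amd64 -p functional-997000 status
    out/minikube-darwin-amd64 -p functional-997000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
    out/minikube-darwin-amd64 -p functional-997000 status -o json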

                                                
                                    
TestFunctional/parallel/ServiceCmd (17.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-997000 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-997000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-nmkj7" [0cce1cbb-1726-433f-bb2b-14f8bdc3d752] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:344: "hello-node-6fddd6858d-nmkj7" [0cce1cbb-1726-433f-bb2b-14f8bdc3d752] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 11.010425164s
functional_test.go:1449: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 service --namespace=default --https --url hello-node

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 service --namespace=default --https --url hello-node: (2.026490009s)
functional_test.go:1476: found endpoint: https://127.0.0.1:50389
functional_test.go:1491: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 service hello-node --url --format={{.IP}}
functional_test.go:1491: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 service hello-node --url --format={{.IP}}: (2.026767808s)
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 service hello-node --url

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1505: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 service hello-node --url: (2.027315851s)
functional_test.go:1511: found endpoint for hello-node: http://127.0.0.1:50429
--- PASS: TestFunctional/parallel/ServiceCmd (17.69s)
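Note: the workflow exercised here is deploy, expose as a NodePort service, then resolve a reachable URL through the docker driver's tunnel; the ports 50389 and 50429 above are ephemeral and will differ between runs. The same sequence against this profile:

    kubectl --context functional-997000 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
    kubectl --context functional-997000 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-darwin-amd64 -p functional-997000 service list
    out/minikube-darwin-amd64 -p functional-997000 service hello-node --url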

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d006b207-6cc7-430b-8c6a-a6223a9297ca] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01095327s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-997000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-997000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-997000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-997000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [73483d69-4804-421d-949a-75a1ec215703] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [73483d69-4804-421d-949a-75a1ec215703] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [73483d69-4804-421d-949a-75a1ec215703] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.00869437s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-997000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-997000 delete -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-997000 delete -f testdata/storage-provisioner/pod.yaml: (1.029885496s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-997000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2afa0318-56e6-4c22-bad4-29775f6b4e7b] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [2afa0318-56e6-4c22-bad4-29775f6b4e7b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [2afa0318-56e6-4c22-bad4-29775f6b4e7b] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00731843s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-997000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.77s)
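Note: the test applies a PVC and a pod that mounts it, writes /tmp/mount/foo, deletes and recreates the pod, then lists /tmp/mount to confirm the file survived, which is what shows the provisioned volume persisting across pod restarts. The kubectl sequence, using the testdata manifests referenced above:

    kubectl --context functional-997000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-997000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-997000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-997000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-997000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-997000 exec sp-pod -- ls /tmp/mount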

                                                
                                    
TestFunctional/parallel/SSHCmd (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1672: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.87s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh -n functional-997000 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 cp functional-997000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd1901181976/001/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh -n functional-997000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.13s)
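Note: cp copies a local file into the node and back out; the /var/folders destination above is a per-run temporary directory. A minimal replay (the local destination path on the last line is illustrative):

    out/minikube-darwin-amd64 -p functional-997000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-darwin-amd64 -p functional-997000 ssh -n functional-997000 "sudo cat /home/docker/cp-test.txt"
    out/minikube-darwin-amd64 -p functional-997000 cp functional-997000:/home/docker/cp-test.txt ./cp-test.txt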

                                                
                                    
TestFunctional/parallel/MySQL (28.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-997000 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-qzw68" [cbb8ebf1-250d-4a26-9104-739d0a5af69f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-qzw68" [cbb8ebf1-250d-4a26-9104-739d0a5af69f] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.050852665s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-997000 exec mysql-888f84dd9-qzw68 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-997000 exec mysql-888f84dd9-qzw68 -- mysql -ppassword -e "show databases;": exit status 1 (238.159055ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-997000 exec mysql-888f84dd9-qzw68 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-997000 exec mysql-888f84dd9-qzw68 -- mysql -ppassword -e "show databases;": exit status 1 (180.815056ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-997000 exec mysql-888f84dd9-qzw68 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-997000 exec mysql-888f84dd9-qzw68 -- mysql -ppassword -e "show databases;": exit status 1 (114.755414ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-997000 exec mysql-888f84dd9-qzw68 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-997000 exec mysql-888f84dd9-qzw68 -- mysql -ppassword -e "show databases;": exit status 1 (120.225753ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0124 09:38:12.869951    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-997000 exec mysql-888f84dd9-qzw68 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.84s)
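Note: the "Access denied" and "Can't connect" errors above are expected while the MySQL container finishes initializing; the test simply retries the same query until it succeeds, which it does on the final attempt. The retried query (the pod name is specific to this run):

    kubectl --context functional-997000 exec mysql-888f84dd9-qzw68 -- mysql -ppassword -e "show databases;"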

                                                
                                    
TestFunctional/parallel/FileSync (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/4355/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh "sudo cat /etc/test/nested/copy/4355/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.45s)

                                                
                                    
TestFunctional/parallel/CertSync (2.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/4355.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh "sudo cat /etc/ssl/certs/4355.pem"
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/4355.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh "sudo cat /usr/share/ca-certificates/4355.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/43552.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh "sudo cat /etc/ssl/certs/43552.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/43552.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh "sudo cat /usr/share/ca-certificates/43552.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.75s)
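Note: CertSync checks that the test certificates (4355.pem and 43552.pem) were synced into the node under both /etc/ssl/certs and /usr/share/ca-certificates, plus what look like their OpenSSL hash names (51391683.0, 3ec20f2e.0). Each check is a plain ssh cat, for example:

    out/minikube-darwin-amd64 -p functional-997000 ssh "sudo cat /etc/ssl/certs/4355.pem"
    out/minikube-darwin-amd64 -p functional-997000 ssh "sudo cat /etc/ssl/certs/51391683.0"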

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-997000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-997000 ssh "sudo systemctl is-active crio": exit status 1 (535.299697ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

                                                
                                    
TestFunctional/parallel/License (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.44s)

                                                
                                    
TestFunctional/parallel/Version/short (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

                                                
                                    
TestFunctional/parallel/Version/components (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-997000 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:v3.3.8-0-gke.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.4
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-997000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-997000
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-997000 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | a99a39d070bfd | 142MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/localhost/my-image                | functional-997000 | 07a5a40be3dd7 | 1.24MB |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| docker.io/library/mysql                     | 5.7               | e982339a20a53 | 452MB  |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-997000 | f1f43eb99bd38 | 30B    |
| registry.k8s.io/coredns/coredns             | v1.9.4            | a81c2ec4e946d | 49.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| registry.k8s.io/etcd                        | v3.3.8-0-gke.1    | 2a575b86cb352 | 425MB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | alpine            | c433c51bbd661 | 40.7MB |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-997000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
|---------------------------------------------|-------------------|---------------|--------|
2023/01/24 09:39:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-997000 image ls --format json:
[{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50e
d43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"a81c2ec4e946de3f8baa403be700db69454b42b50ab2cd17731f80065c62d42d","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.4"],"size":"49800000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"2a575b86cb35225ed31fa5ee639ff14359a79b40982ce2bc6a5a36f642f9e97b","repoDigests":[],"repoTags":["registry.k8s.io/etcd:v3.3.8-0-gke.1"],"size":"425000000"},{"i
d":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-997000"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"07a5a40be3dd73da79018653a4b02126b4045b5e24247fa652a6e6aabee9a077","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-997000"],"size":"1240000"},{"id":"f1f43eb99bd38936486851f0c6fc1322ab71c437233c9b9a1b52e7903b3a2fbb","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-997000"],"size":"30"},{"id":"e982339a20a53052bd5f2b2e8438b3c95c91013f653ee781a67934cd1f9f9631","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"452000000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"e6f1816883972d4be4
7bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-997000 image ls --format yaml:
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-997000
size: "32900000"
- id: 2a575b86cb35225ed31fa5ee639ff14359a79b40982ce2bc6a5a36f642f9e97b
repoDigests: []
repoTags:
- registry.k8s.io/etcd:v3.3.8-0-gke.1
size: "425000000"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: e982339a20a53052bd5f2b2e8438b3c95c91013f653ee781a67934cd1f9f9631
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "452000000"
- id: a81c2ec4e946de3f8baa403be700db69454b42b50ab2cd17731f80065c62d42d
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.4
size: "49800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: f1f43eb99bd38936486851f0c6fc1322ab71c437233c9b9a1b52e7903b3a2fbb
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-997000
size: "30"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-997000 ssh pgrep buildkitd: exit status 1 (394.735773ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image build -t localhost/my-image:functional-997000 testdata/build
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 image build -t localhost/my-image:functional-997000 testdata/build: (2.728934568s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-997000 image build -t localhost/my-image:functional-997000 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 327c2f5223e7
Removing intermediate container 327c2f5223e7
---> 57a503772a3f
Step 3/3 : ADD content.txt /
---> 07a5a40be3dd
Successfully built 07a5a40be3dd
Successfully tagged localhost/my-image:functional-997000
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.45s)
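Note: the pgrep buildkitd probe above exits non-zero, and the build output that follows (Sending build context to Docker daemon, Step 1/3 ...) is from the classic Docker builder inside the node; the freshly built image then shows up in the profile's image list. The two commands:

    out/minikube-darwin-amd64 -p functional-997000 image build -t localhost/my-image:functional-997000 testdata/build
    out/minikube-darwin-amd64 -p functional-997000 image ls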

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.36883769s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-997000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.45s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-997000 docker-env) && out/minikube-darwin-amd64 status -p functional-997000"
functional_test.go:492: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-997000 docker-env) && out/minikube-darwin-amd64 status -p functional-997000": (1.242452644s)
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-997000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.00s)
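Note: docker-env prints environment variables that point a local docker client at the Docker daemon inside the minikube node; the test evaluates them in a bash subshell and confirms that both minikube status and docker images still work against that daemon. The second invocation verbatim:

    /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-997000 docker-env) && docker images"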

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image load --daemon gcr.io/google-containers/addon-resizer:functional-997000

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 image load --daemon gcr.io/google-containers/addon-resizer:functional-997000: (3.364863669s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.75s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.51s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image load --daemon gcr.io/google-containers/addon-resizer:functional-997000

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 image load --daemon gcr.io/google-containers/addon-resizer:functional-997000: (2.159737442s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.082837046s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-997000
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image load --daemon gcr.io/google-containers/addon-resizer:functional-997000
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 image load --daemon gcr.io/google-containers/addon-resizer:functional-997000: (4.246933202s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.81s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image save gcr.io/google-containers/addon-resizer:functional-997000 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 image save gcr.io/google-containers/addon-resizer:functional-997000 /Users/jenkins/workspace/addon-resizer-save.tar: (2.07340315s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.07s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.82s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image rm gcr.io/google-containers/addon-resizer:functional-997000
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.82s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.899348275s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.23s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-997000
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 image save --daemon gcr.io/google-containers/addon-resizer:functional-997000
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-997000 image save --daemon gcr.io/google-containers/addon-resizer:functional-997000: (2.497337495s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-997000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-997000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.21s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-997000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f5097061-3d40-4410-950f-94d8681883ca] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:344: "nginx-svc" [f5097061-3d40-4410-950f-94d8681883ca] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.061346731s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.21s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-997000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-997000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7115: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

TestFunctional/parallel/ProfileCmd/profile_list (0.52s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "435.069804ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "82.585959ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "432.121784ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "83.03499ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/MountCmd/any-port (8.96s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-997000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port938421797/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1674581917304684000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port938421797/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1674581917304684000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port938421797/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1674581917304684000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port938421797/001/test-1674581917304684000
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-997000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (455.918275ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 24 17:38 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 24 17:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 24 17:38 test-1674581917304684000
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh cat /mount-9p/test-1674581917304684000
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-997000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d81bfbdd-0820-4859-bd27-3faabe99f581] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [d81bfbdd-0820-4859-bd27-3faabe99f581] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d81bfbdd-0820-4859-bd27-3faabe99f581] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d81bfbdd-0820-4859-bd27-3faabe99f581] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.010304729s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-997000 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-997000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port938421797/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.96s)

TestFunctional/parallel/MountCmd/specific-port (3.2s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-997000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2768738463/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-997000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (423.163442ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-997000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2768738463/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 -p functional-997000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-997000 ssh "sudo umount -f /mount-9p": exit status 1 (615.963485ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:228: "out/minikube-darwin-amd64 -p functional-997000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-997000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2768738463/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (3.20s)

TestFunctional/delete_addon-resizer_images (0.16s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-997000
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.06s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-997000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-997000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestImageBuild/serial/NormalBuild (2.3s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-730000
image_test.go:73: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-730000: (2.297660129s)
--- PASS: TestImageBuild/serial/NormalBuild (2.30s)

TestImageBuild/serial/BuildWithBuildArg (0.95s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-730000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.95s)

TestImageBuild/serial/BuildWithDockerIgnore (0.48s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-730000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.48s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.42s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-730000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.42s)

TestJSONOutput/start/Command (69.62s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-916000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0124 09:47:45.072999    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:48:12.766242    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-916000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (1m9.621717964s)
--- PASS: TestJSONOutput/start/Command (69.62s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-916000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-916000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.85s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-916000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-916000 --output=json --user=testUser: (10.853090386s)
--- PASS: TestJSONOutput/stop/Command (10.85s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.78s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-211000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-211000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (380.51046ms)
-- stdout --
	{"specversion":"1.0","id":"3afbca3a-f24d-46a2-96eb-e29753b40d8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-211000] minikube v1.28.0 on Darwin 13.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b76c15b-d718-4c2f-9c34-1ddecde6e44a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"867e6fd4-5795-4127-afb3-3d7ffcd68c5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig"}}
	{"specversion":"1.0","id":"2e6d4f6b-f88c-4089-a352-4698e8840923","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"b1465055-38b6-4221-99cd-6183b9805a2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f15a6fde-784a-4f2d-90c5-86ff7d144b23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube"}}
	{"specversion":"1.0","id":"43749dda-2468-42e6-b8b7-56b98480c0c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5ddfbcc1-cfab-423c-a7f8-47c603188f3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-211000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-211000
--- PASS: TestErrorJSONOutput (0.78s)

TestKicCustomNetwork/create_custom_network (51.12s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-699000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-699000 --network=: (48.32315456s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-699000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-699000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-699000: (2.733847664s)
--- PASS: TestKicCustomNetwork/create_custom_network (51.12s)

TestKicCustomNetwork/use_default_bridge_network (53.74s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-776000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-776000 --network=bridge: (51.15758139s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-776000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-776000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-776000: (2.523036726s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (53.74s)

TestKicExistingNetwork (50.81s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-521000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-521000 --network=existing-network: (47.877216412s)
helpers_test.go:175: Cleaning up "existing-network-521000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-521000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-521000: (2.578452056s)
--- PASS: TestKicExistingNetwork (50.81s)

TestKicCustomSubnet (57.18s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-026000 --subnet=192.168.60.0/24
E0124 09:51:50.951103    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-026000 --subnet=192.168.60.0/24: (54.430884376s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-026000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-026000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-026000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-026000: (2.694484959s)
--- PASS: TestKicCustomSubnet (57.18s)

TestKicStaticIP (53.25s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-355000 --static-ip=192.168.200.200
E0124 09:52:45.086423    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-355000 --static-ip=192.168.200.200: (50.279858326s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-355000 ip
helpers_test.go:175: Cleaning up "static-ip-355000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-355000
E0124 09:53:13.999727    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-355000: (2.715039557s)
--- PASS: TestKicStaticIP (53.25s)

TestMainNoArgs (0.08s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (109.19s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-147000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-147000 --driver=docker : (49.115093047s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-148000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-148000 --driver=docker : (52.887238992s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-147000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-148000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-148000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-148000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-148000: (2.660645111s)
helpers_test.go:175: Cleaning up "first-147000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-147000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-147000: (2.650088249s)
--- PASS: TestMinikubeProfile (109.19s)

TestMountStart/serial/StartWithMountFirst (8.19s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-546000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-546000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (7.186929501s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.19s)

TestMountStart/serial/VerifyMountFirst (0.4s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-546000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (7.98s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-559000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-559000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.97449886s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.98s)

TestMountStart/serial/VerifyMountSecond (0.47s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-559000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.47s)

TestMountStart/serial/DeleteFirst (2.14s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-546000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-546000 --alsologtostderr -v=5: (2.136119506s)
--- PASS: TestMountStart/serial/DeleteFirst (2.14s)

TestMountStart/serial/VerifyMountPostDelete (0.4s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-559000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

TestMountStart/serial/Stop (1.58s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-559000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-559000: (1.579360564s)
--- PASS: TestMountStart/serial/Stop (1.58s)

TestMountStart/serial/RestartStopped (9s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-559000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-559000: (8.002800469s)
--- PASS: TestMountStart/serial/RestartStopped (9.00s)

TestMountStart/serial/VerifyMountPostStop (0.4s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-559000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (88.44s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-281000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0124 09:56:50.945796    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-281000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m27.721026974s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (88.44s)

TestMultiNode/serial/DeployApp2Nodes (9.46s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-281000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-281000 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-281000 -- rollout status deployment/busybox: (7.624846123s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-281000 -- exec busybox-6b86dd6d48-qc7z4 -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-281000 -- exec busybox-6b86dd6d48-x84zx -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-281000 -- exec busybox-6b86dd6d48-qc7z4 -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-281000 -- exec busybox-6b86dd6d48-x84zx -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-281000 -- exec busybox-6b86dd6d48-qc7z4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-281000 -- exec busybox-6b86dd6d48-x84zx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.46s)

TestMultiNode/serial/PingHostFrom2Pods (0.93s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-281000 -- exec busybox-6b86dd6d48-qc7z4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-281000 -- exec busybox-6b86dd6d48-qc7z4 -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-281000 -- exec busybox-6b86dd6d48-x84zx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-281000 -- exec busybox-6b86dd6d48-x84zx -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)

TestMultiNode/serial/AddNode (22.78s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-281000 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-281000 -v 3 --alsologtostderr: (21.665561207s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-281000 status --alsologtostderr: (1.11241458s)
--- PASS: TestMultiNode/serial/AddNode (22.78s)

TestMultiNode/serial/ProfileList (0.5s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.50s)

TestMultiNode/serial/CopyFile (15.32s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-281000 status --output json --alsologtostderr: (1.04316953s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 cp testdata/cp-test.txt multinode-281000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 cp multinode-281000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile78235782/001/cp-test_multinode-281000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 cp multinode-281000:/home/docker/cp-test.txt multinode-281000-m02:/home/docker/cp-test_multinode-281000_multinode-281000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000-m02 "sudo cat /home/docker/cp-test_multinode-281000_multinode-281000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 cp multinode-281000:/home/docker/cp-test.txt multinode-281000-m03:/home/docker/cp-test_multinode-281000_multinode-281000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000-m03 "sudo cat /home/docker/cp-test_multinode-281000_multinode-281000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 cp testdata/cp-test.txt multinode-281000-m02:/home/docker/cp-test.txt
E0124 09:57:45.081161    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 cp multinode-281000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile78235782/001/cp-test_multinode-281000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 cp multinode-281000-m02:/home/docker/cp-test.txt multinode-281000:/home/docker/cp-test_multinode-281000-m02_multinode-281000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000 "sudo cat /home/docker/cp-test_multinode-281000-m02_multinode-281000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 cp multinode-281000-m02:/home/docker/cp-test.txt multinode-281000-m03:/home/docker/cp-test_multinode-281000-m02_multinode-281000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000-m03 "sudo cat /home/docker/cp-test_multinode-281000-m02_multinode-281000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 cp testdata/cp-test.txt multinode-281000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 cp multinode-281000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile78235782/001/cp-test_multinode-281000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 cp multinode-281000-m03:/home/docker/cp-test.txt multinode-281000:/home/docker/cp-test_multinode-281000-m03_multinode-281000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000 "sudo cat /home/docker/cp-test_multinode-281000-m03_multinode-281000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 cp multinode-281000-m03:/home/docker/cp-test.txt multinode-281000-m02:/home/docker/cp-test_multinode-281000-m03_multinode-281000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 ssh -n multinode-281000-m02 "sudo cat /home/docker/cp-test_multinode-281000-m03_multinode-281000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (15.32s)

                                                
                                    
TestMultiNode/serial/StopNode (3.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-281000 node stop m03: (1.527426893s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-281000 status: exit status 7 (784.539413ms)

                                                
                                                
-- stdout --
	multinode-281000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-281000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-281000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-281000 status --alsologtostderr: exit status 7 (774.931169ms)

                                                
                                                
-- stdout --
	multinode-281000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-281000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-281000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0124 09:57:56.831495   12435 out.go:296] Setting OutFile to fd 1 ...
	I0124 09:57:56.831657   12435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 09:57:56.831663   12435 out.go:309] Setting ErrFile to fd 2...
	I0124 09:57:56.831669   12435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 09:57:56.831782   12435 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
	I0124 09:57:56.831961   12435 out.go:303] Setting JSON to false
	I0124 09:57:56.831985   12435 mustload.go:65] Loading cluster: multinode-281000
	I0124 09:57:56.832020   12435 notify.go:220] Checking for updates...
	I0124 09:57:56.832265   12435 config.go:180] Loaded profile config "multinode-281000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 09:57:56.832279   12435 status.go:255] checking status of multinode-281000 ...
	I0124 09:57:56.832675   12435 cli_runner.go:164] Run: docker container inspect multinode-281000 --format={{.State.Status}}
	I0124 09:57:56.893237   12435 status.go:330] multinode-281000 host status = "Running" (err=<nil>)
	I0124 09:57:56.893267   12435 host.go:66] Checking if "multinode-281000" exists ...
	I0124 09:57:56.893517   12435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-281000
	I0124 09:57:56.953902   12435 host.go:66] Checking if "multinode-281000" exists ...
	I0124 09:57:56.954162   12435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0124 09:57:56.954226   12435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-281000
	I0124 09:57:57.012181   12435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51435 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/multinode-281000/id_rsa Username:docker}
	I0124 09:57:57.104539   12435 ssh_runner.go:195] Run: systemctl --version
	I0124 09:57:57.109194   12435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 09:57:57.118634   12435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-281000
	I0124 09:57:57.178022   12435 kubeconfig.go:92] found "multinode-281000" server: "https://127.0.0.1:51439"
	I0124 09:57:57.178049   12435 api_server.go:165] Checking apiserver status ...
	I0124 09:57:57.178090   12435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0124 09:57:57.188541   12435 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2391/cgroup
	W0124 09:57:57.196983   12435 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2391/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0124 09:57:57.197048   12435 ssh_runner.go:195] Run: ls
	I0124 09:57:57.200997   12435 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51439/healthz ...
	I0124 09:57:57.206217   12435 api_server.go:278] https://127.0.0.1:51439/healthz returned 200:
	ok
	I0124 09:57:57.206230   12435 status.go:421] multinode-281000 apiserver status = Running (err=<nil>)
	I0124 09:57:57.206240   12435 status.go:257] multinode-281000 status: &{Name:multinode-281000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0124 09:57:57.206258   12435 status.go:255] checking status of multinode-281000-m02 ...
	I0124 09:57:57.206511   12435 cli_runner.go:164] Run: docker container inspect multinode-281000-m02 --format={{.State.Status}}
	I0124 09:57:57.266464   12435 status.go:330] multinode-281000-m02 host status = "Running" (err=<nil>)
	I0124 09:57:57.266486   12435 host.go:66] Checking if "multinode-281000-m02" exists ...
	I0124 09:57:57.266744   12435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-281000-m02
	I0124 09:57:57.326233   12435 host.go:66] Checking if "multinode-281000-m02" exists ...
	I0124 09:57:57.326492   12435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0124 09:57:57.326547   12435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-281000-m02
	I0124 09:57:57.385655   12435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51511 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/multinode-281000-m02/id_rsa Username:docker}
	I0124 09:57:57.478109   12435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0124 09:57:57.487590   12435 status.go:257] multinode-281000-m02 status: &{Name:multinode-281000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0124 09:57:57.487613   12435 status.go:255] checking status of multinode-281000-m03 ...
	I0124 09:57:57.487867   12435 cli_runner.go:164] Run: docker container inspect multinode-281000-m03 --format={{.State.Status}}
	I0124 09:57:57.547453   12435 status.go:330] multinode-281000-m03 host status = "Stopped" (err=<nil>)
	I0124 09:57:57.547475   12435 status.go:343] host is not running, skipping remaining checks
	I0124 09:57:57.547483   12435 status.go:257] multinode-281000-m03 status: &{Name:multinode-281000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.09s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-281000 node start m03 --alsologtostderr: (9.667729169s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 status
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-281000 status: (1.001472656s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.78s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (109.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-281000
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-281000
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-281000: (23.057331427s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-281000 --wait=true -v=8 --alsologtostderr
E0124 09:59:08.132584    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-281000 --wait=true -v=8 --alsologtostderr: (1m25.975645582s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-281000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (109.15s)

                                                
                                    
TestMultiNode/serial/DeleteNode (6.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-281000 node delete m03: (5.319148722s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.23s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (22.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-281000 stop: (21.679282445s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-281000 status: exit status 7 (172.620864ms)

                                                
                                                
-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-281000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-281000 status --alsologtostderr: exit status 7 (235.53796ms)

                                                
                                                
-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-281000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0124 10:00:25.627682   13036 out.go:296] Setting OutFile to fd 1 ...
	I0124 10:00:25.627958   13036 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 10:00:25.627963   13036 out.go:309] Setting ErrFile to fd 2...
	I0124 10:00:25.627967   13036 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0124 10:00:25.628073   13036 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
	I0124 10:00:25.628290   13036 out.go:303] Setting JSON to false
	I0124 10:00:25.628333   13036 mustload.go:65] Loading cluster: multinode-281000
	I0124 10:00:25.628408   13036 notify.go:220] Checking for updates...
	I0124 10:00:25.628658   13036 config.go:180] Loaded profile config "multinode-281000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0124 10:00:25.628671   13036 status.go:255] checking status of multinode-281000 ...
	I0124 10:00:25.629039   13036 cli_runner.go:164] Run: docker container inspect multinode-281000 --format={{.State.Status}}
	I0124 10:00:25.685893   13036 status.go:330] multinode-281000 host status = "Stopped" (err=<nil>)
	I0124 10:00:25.685911   13036 status.go:343] host is not running, skipping remaining checks
	I0124 10:00:25.685918   13036 status.go:257] multinode-281000 status: &{Name:multinode-281000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0124 10:00:25.685938   13036 status.go:255] checking status of multinode-281000-m02 ...
	I0124 10:00:25.686173   13036 cli_runner.go:164] Run: docker container inspect multinode-281000-m02 --format={{.State.Status}}
	I0124 10:00:25.806226   13036 status.go:330] multinode-281000-m02 host status = "Stopped" (err=<nil>)
	I0124 10:00:25.806245   13036 status.go:343] host is not running, skipping remaining checks
	I0124 10:00:25.806250   13036 status.go:257] multinode-281000-m02 status: &{Name:multinode-281000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (22.09s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (71.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-281000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-281000 --wait=true -v=8 --alsologtostderr --driver=docker : (1m10.354661005s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-281000 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (71.27s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (57.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-281000
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-281000-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-281000-m02 --driver=docker : exit status 14 (738.879785ms)

                                                
                                                
-- stdout --
	* [multinode-281000-m02] minikube v1.28.0 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-281000-m02' is duplicated with machine name 'multinode-281000-m02' in profile 'multinode-281000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-281000-m03 --driver=docker 
E0124 10:01:50.940236    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-281000-m03 --driver=docker : (53.277000768s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-281000
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-281000: exit status 80 (485.6938ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-281000
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-281000-m03 already exists in multinode-281000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-281000-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-281000-m03: (2.725521783s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (57.29s)

                                                
                                    
TestPreload (115.17s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-222000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0124 10:02:45.076455    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-222000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m0.542403094s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-222000 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-222000 -- docker pull gcr.io/k8s-minikube/busybox: (2.137346584s)
preload_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-222000
preload_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-222000: (10.943693324s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-222000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-222000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (38.422806677s)
preload_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-222000 -- docker images
helpers_test.go:175: Cleaning up "test-preload-222000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-222000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-222000: (2.690014353s)
--- PASS: TestPreload (115.17s)

                                                
                                    
TestScheduledStopUnix (126.86s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-588000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-588000 --memory=2048 --driver=docker : (52.409452206s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-588000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-588000 -n scheduled-stop-588000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-588000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-588000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-588000 -n scheduled-stop-588000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-588000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-588000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-588000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-588000: exit status 7 (115.388854ms)

                                                
                                                
-- stdout --
	scheduled-stop-588000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-588000 -n scheduled-stop-588000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-588000 -n scheduled-stop-588000: exit status 7 (113.285599ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-588000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-588000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-588000: (2.402033269s)
--- PASS: TestScheduledStopUnix (126.86s)

                                                
                                    
TestSkaffold (85.39s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2120731359 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-721000 --memory=2600 --driver=docker 
E0124 10:06:50.934143    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-721000 --memory=2600 --driver=docker : (52.319803472s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2120731359 run --minikube-profile skaffold-721000 --kube-context skaffold-721000 --status-check=true --port-forward=false --interactive=false
E0124 10:07:45.069197    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2120731359 run --minikube-profile skaffold-721000 --kube-context skaffold-721000 --status-check=true --port-forward=false --interactive=false: (18.215877189s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-76c6bdb46b-b4zs6" [fd039ea6-0dd3-4793-857e-1e86a02fef73] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012680951s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6b9894c79d-vnnz4" [f294df83-da1c-46cb-95f5-06a861b2ee1b] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.007740855s
helpers_test.go:175: Cleaning up "skaffold-721000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-721000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-721000: (2.947483352s)
--- PASS: TestSkaffold (85.39s)

                                                
                                    
TestInsufficientStorage (15.03s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-400000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-400000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (11.834989205s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"435f795a-1ccf-4b23-b82b-382329912384","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-400000] minikube v1.28.0 on Darwin 13.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9de47397-9dba-430a-96bc-1807d079352c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"fa3579bf-71d9-4ad1-8289-dae5ae505fc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig"}}
	{"specversion":"1.0","id":"69acc43b-e6d6-42d7-9014-adb801c6afc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"436635b4-0f79-4fa1-94fa-6980d8a98843","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"57ddd87d-dc88-4acc-bcfd-362459ab483a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube"}}
	{"specversion":"1.0","id":"1868b42e-944e-4d02-be8c-4c0f66a08363","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a740de94-e907-44c2-9efa-9b2aa8fe5a25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e8402790-3717-4dbd-b866-bd90bc7d9882","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b62845d7-87e7-4f17-83ec-dded7ed57cf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b2657ab4-7c46-474d-b481-3d2ee52bf305","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"e6dc3793-0e6c-4520-b37b-0da06a4c0b11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-400000 in cluster insufficient-storage-400000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bb6310a7-f723-4ef7-964f-8887d15f7088","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"111dbdd8-4788-41cb-aeb2-1b01b3994629","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"51dd4c89-32a2-4fbd-941a-0bb1dba65fbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-400000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-400000 --output=json --layout=cluster: exit status 7 (401.096857ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-400000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-400000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0124 10:08:18.783911   15299 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-400000" does not appear in /Users/jenkins/minikube-integration/15565-3057/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-400000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-400000 --output=json --layout=cluster: exit status 7 (404.767038ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-400000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-400000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0124 10:08:19.189041   15309 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-400000" does not appear in /Users/jenkins/minikube-integration/15565-3057/kubeconfig
	E0124 10:08:19.198196   15309 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/insufficient-storage-400000/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-400000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-400000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-400000: (2.387913925s)
--- PASS: TestInsufficientStorage (15.03s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.21s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1811551081/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1811551081/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1811551081/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1811551081/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.21s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.39s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2218154028/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2218154028/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2218154028/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2218154028/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.39s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-419000
version_upgrade_test.go:214: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-419000: (3.540764672s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.54s)

                                                
                                    
TestPause/serial/Start (65.71s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-324000 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0124 10:16:51.037744    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-324000 --memory=2048 --install-addons=false --wait=all --driver=docker : (1m5.706183275s)
--- PASS: TestPause/serial/Start (65.71s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (44.44s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-324000 --alsologtostderr -v=1 --driver=docker 
E0124 10:17:45.174823    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 10:17:53.692630    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-324000 --alsologtostderr -v=1 --driver=docker : (44.425190194s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (44.44s)

                                                
                                    
TestPause/serial/Pause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-324000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

                                                
                                    
TestPause/serial/VerifyStatus (0.43s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-324000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-324000 --output=json --layout=cluster: exit status 2 (428.079043ms)

                                                
                                                
-- stdout --
	{"Name":"pause-324000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-324000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)

                                                
                                    
TestPause/serial/Unpause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-324000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.73s)

                                                
                                    
TestPause/serial/PauseAgain (0.82s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-324000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

                                                
                                    
TestPause/serial/DeletePaused (2.73s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-324000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-324000 --alsologtostderr -v=5: (2.730606619s)
--- PASS: TestPause/serial/DeletePaused (2.73s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-324000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-324000: exit status 1 (55.077255ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-324000

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-391000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-391000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (404.708738ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-391000] minikube v1.28.0 on Darwin 13.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.40s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (54.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-391000 --driver=docker 
E0124 10:18:21.376770    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-391000 --driver=docker : (53.858654824s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-391000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (54.30s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (9.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-391000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-391000 --no-kubernetes --driver=docker : (6.164124775s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-391000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-391000 status -o json: exit status 2 (445.630912ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-391000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-391000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-391000: (2.605481905s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.22s)

                                                
                                    
TestNoKubernetes/serial/Start (7.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-391000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-391000 --no-kubernetes --driver=docker : (7.259048618s)
--- PASS: TestNoKubernetes/serial/Start (7.26s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-391000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-391000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (501.506507ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.50s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list

                                                
                                                
=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (1.299155565s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json

                                                
                                                
=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (1.166969162s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.47s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-391000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-391000: (1.685757048s)
--- PASS: TestNoKubernetes/serial/Stop (1.69s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-391000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-391000 --driver=docker : (5.56700657s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (5.57s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-391000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-391000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (398.90913ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (71.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p auto-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (1m11.072932591s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.07s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (94.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 

=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (1m34.554466205s)
--- PASS: TestNetworkPlugins/group/flannel/Start (94.55s)

TestNetworkPlugins/group/auto/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-129000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

TestNetworkPlugins/group/auto/NetCatPod (15.22s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-129000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-2jsqv" [9eab873c-7d8d-4628-85bc-13c2322cec29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-2jsqv" [9eab873c-7d8d-4628-85bc-13c2322cec29] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.008848759s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.22s)

TestNetworkPlugins/group/auto/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-129000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/Start (72.63s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (1m12.628518593s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.63s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jdxm6" [f5e69fd7-1d2f-4521-ab58-715c45f0ef55] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.014467398s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-129000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.46s)

TestNetworkPlugins/group/flannel/NetCatPod (15.25s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-129000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-q6jhr" [93141b07-d405-48fe-aedb-0c5d7b66e952] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-q6jhr" [93141b07-d405-48fe-aedb-0c5d7b66e952] Running
E0124 10:21:51.039339    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.008549432s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.25s)

TestNetworkPlugins/group/flannel/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-129000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (66.52s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (1m6.517205261s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.52s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-69zpm" [536a055a-b33f-48dc-9a0c-390e063db253] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.023710308s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-129000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

TestNetworkPlugins/group/kindnet/NetCatPod (19.36s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-129000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-5xbgj" [7454fa9f-e03f-4e3a-90f8-c5154729edc1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0124 10:22:45.175937    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 10:22:53.695589    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-5xbgj" [7454fa9f-e03f-4e3a-90f8-c5154729edc1] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 19.008118205s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (19.36s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-129000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.51s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-129000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.51s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-129000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-46dn9" [2fc69edd-64cb-4d88-9503-425f03d6028d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:344: "netcat-694fc96674-46dn9" [2fc69edd-64cb-4d88-9503-425f03d6028d] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.011126127s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.25s)

TestNetworkPlugins/group/bridge/Start (69.88s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (1m9.880942265s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.88s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-129000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/kubenet/Start (66.25s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (1m6.248980938s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (66.25s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.48s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-129000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.48s)

TestNetworkPlugins/group/bridge/NetCatPod (15.22s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-129000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-c8mkx" [a9a38095-5ffb-4aa0-8ebf-2ab0ed2a7e9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-c8mkx" [a9a38095-5ffb-4aa0-8ebf-2ab0ed2a7e9a] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.010289671s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.22s)

TestNetworkPlugins/group/bridge/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-129000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-129000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.46s)

TestNetworkPlugins/group/kubenet/NetCatPod (16.26s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-129000 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:344: "netcat-694fc96674-7llkc" [cd1083ce-5587-45fc-b2eb-c1c2b38a1833] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:344: "netcat-694fc96674-7llkc" [cd1083ce-5587-45fc-b2eb-c1c2b38a1833] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 16.013244635s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (16.26s)

TestNetworkPlugins/group/custom-flannel/Start (78.62s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 

=== CONT  TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (1m18.616739024s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (78.62s)

TestNetworkPlugins/group/kubenet/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-129000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.15s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/kubenet/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (104.67s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
E0124 10:26:03.834786    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/auto-129000/client.crt: no such file or directory
E0124 10:26:24.316605    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/auto-129000/client.crt: no such file or directory
E0124 10:26:31.982185    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
E0124 10:26:31.987632    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
E0124 10:26:31.999390    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
E0124 10:26:32.020150    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
E0124 10:26:32.060402    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
E0124 10:26:32.140858    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
E0124 10:26:32.302970    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
E0124 10:26:32.623061    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
E0124 10:26:33.263585    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
E0124 10:26:34.093665    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 10:26:34.544245    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p calico-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m44.669969399s)
--- PASS: TestNetworkPlugins/group/calico/Start (104.67s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-129000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (15.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-129000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-7tlv9" [cad79c8c-014a-4b2f-8bec-1517ea3a6260] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0124 10:26:37.104431    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
E0124 10:26:42.224623    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-7tlv9" [cad79c8c-014a-4b2f-8bec-1517ea3a6260] Running
E0124 10:26:51.042061    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 15.010735179s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (15.20s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-129000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/false/Start (80.37s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p false-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
E0124 10:27:36.562214    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
E0124 10:27:36.567592    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
E0124 10:27:36.577748    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
E0124 10:27:36.599821    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
E0124 10:27:36.641447    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
E0124 10:27:36.721782    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
E0124 10:27:36.882421    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
E0124 10:27:37.202688    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
E0124 10:27:37.842813    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
E0124 10:27:39.122989    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
E0124 10:27:41.683742    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p false-129000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (1m20.374133784s)
--- PASS: TestNetworkPlugins/group/false/Start (80.37s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7rssq" [5f31dfa1-d339-4d6b-897d-7ac3dc663052] Running
E0124 10:27:45.177119    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 10:27:46.803977    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.018887313s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-129000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

TestNetworkPlugins/group/calico/NetCatPod (18.27s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-129000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-chjqn" [7adc46bd-d784-42c1-b5f1-f6bbbd84dd64] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0124 10:27:53.696353    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:27:53.907700    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
E0124 10:27:57.044285    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-chjqn" [7adc46bd-d784-42c1-b5f1-f6bbbd84dd64] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 18.012003391s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (18.27s)

TestNetworkPlugins/group/calico/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-129000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/false/KubeletFlags (0.47s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-129000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.47s)

TestNetworkPlugins/group/false/NetCatPod (14.22s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-129000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-v8sv2" [c5cf40f9-c397-496c-8925-7c06fa7a49af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0124 10:28:46.791828    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/enable-default-cni-129000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-v8sv2" [c5cf40f9-c397-496c-8925-7c06fa7a49af] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 14.010935997s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.22s)

TestNetworkPlugins/group/false/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-129000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

TestNetworkPlugins/group/false/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-129000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (61.61s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-307000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0124 10:29:37.396870    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:29:37.403165    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:29:37.413476    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:29:37.433591    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:29:37.473814    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:29:37.553948    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:29:37.714060    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:29:38.034187    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:29:38.674362    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:29:39.954452    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:29:42.514655    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:29:47.635063    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:29:48.232943    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/enable-default-cni-129000/client.crt: no such file or directory
E0124 10:29:57.876013    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:30:14.426613    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:30:14.432671    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:30:14.442753    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:30:14.462886    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:30:14.505046    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:30:14.586060    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:30:14.746291    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:30:15.066664    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:30:15.706868    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:30:16.987360    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:30:18.356378    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-307000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (1m1.607715113s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (61.61s)

TestStartStop/group/no-preload/serial/DeployApp (9.28s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-307000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e8128fb1-bd5b-4213-8894-0596256e7f8d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0124 10:30:19.547548    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:30:20.406513    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e8128fb1-bd5b-4213-8894-0596256e7f8d] Running
E0124 10:30:24.667737    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.013955983s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-307000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.28s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-307000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-307000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/no-preload/serial/Stop (10.97s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-307000 --alsologtostderr -v=3
E0124 10:30:34.910221    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-307000 --alsologtostderr -v=3: (10.968118473s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.97s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.46s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-307000 -n no-preload-307000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-307000 -n no-preload-307000: exit status 7 (112.955441ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-307000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.46s)

TestStartStop/group/no-preload/serial/SecondStart (308.24s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-307000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0124 10:30:43.321115    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/auto-129000/client.crt: no such file or directory
E0124 10:30:55.391286    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:30:59.318031    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:31:10.153725    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/enable-default-cni-129000/client.crt: no such file or directory
E0124 10:31:11.042591    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/auto-129000/client.crt: no such file or directory
E0124 10:31:31.982222    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
E0124 10:31:36.352283    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:31:36.610292    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:31:36.615981    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:31:36.626420    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:31:36.646567    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:31:36.688745    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:31:36.770962    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:31:36.931276    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:31:37.251550    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:31:37.893329    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:31:39.174098    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:31:41.734254    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:31:46.856443    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:31:51.043250    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 10:31:57.096659    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:31:59.670634    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
E0124 10:32:17.578815    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:32:21.239176    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:32:28.233988    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 10:32:36.564202    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
E0124 10:32:41.809870    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:32:41.815307    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:32:41.825859    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:32:41.847118    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:32:41.887467    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:32:41.969241    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:32:42.129331    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:32:42.449742    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:32:43.089939    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-307000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (5m7.685861918s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-307000 -n no-preload-307000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (308.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-115000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-115000 --alsologtostderr -v=3: (1.58280664s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-115000 -n old-k8s-version-115000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-115000 -n old-k8s-version-115000: exit status 7 (115.945075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-115000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.40s)
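
The EnableAddonAfterStop steps above rely on "minikube status" exiting non-zero (exit status 7 here, with "Stopped" on stdout) for a profile whose host has just been stopped, which the test explicitly treats as acceptable ("may be ok"). Below is a minimal, hypothetical Go sketch of that tolerance, reusing the exact binary, flags, and profile name from the log; it is not the helper from start_stop_delete_test.go.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as in the log above; exit status 7 with "Stopped" on
	// stdout is the expected result for a profile that was just stopped.
	cmd := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-115000", "-n", "old-k8s-version-115000")
	out, err := cmd.Output()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
		fmt.Printf("host reported as stopped (exit 7, may be ok): %s", out)
		return
	}
	if err != nil {
		fmt.Printf("unexpected status failure: %v\n", err)
		return
	}
	fmt.Printf("host status: %s", out)
}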

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-7m8ww" [e51c3dde-0537-4da4-84ba-dd556b45ea13] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-7m8ww" [e51c3dde-0537-4da4-84ba-dd556b45ea13] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.015151886s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-7m8ww" [e51c3dde-0537-4da4-84ba-dd556b45ea13] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007162689s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-307000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-307000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.48s)
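
VerifyKubernetesImages shells into the node and dumps the container runtime's image list as JSON ("sudo crictl images -o json"), then scans it for non-minikube images such as gcr.io/k8s-minikube/busybox:1.28.4-glibc. A rough, self-contained sketch of that scan follows; the JSON field names follow crictl's usual output shape and the parsing is illustrative, not the test's own code.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// crictlImages mirrors the shape of "crictl images -o json" closely enough
// for this illustration; it is not the struct used by the test.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "ssh",
		"-p", "no-preload-307000", "sudo crictl images -o json").Output()
	if err != nil {
		log.Fatalf("minikube ssh failed: %v", err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatalf("could not decode image list: %v", err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // e.g. gcr.io/k8s-minikube/busybox:1.28.4-glibc
		}
	}
}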

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.37s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-307000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-307000 -n no-preload-307000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-307000 -n no-preload-307000: exit status 2 (439.344751ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-307000 -n no-preload-307000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-307000 -n no-preload-307000: exit status 2 (435.947491ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-307000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-307000 -n no-preload-307000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-307000 -n no-preload-307000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.37s)
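
The Pause step is a round trip: pause the profile, confirm via two status queries that the apiserver reports Paused and the kubelet reports Stopped (both queries exit with status 2, which the test again tolerates), then unpause and re-check. A condensed, hypothetical Go sketch of the same sequence, reusing the exact commands from the log:

package main

import (
	"log"
	"os/exec"
)

func run(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
}

func main() {
	const profile = "no-preload-307000"
	if out, err := run("pause", "-p", profile, "--alsologtostderr", "-v=1"); err != nil {
		log.Fatalf("pause failed: %v\n%s", err, out)
	}
	// While paused, the status checks exit non-zero (exit status 2 above), so
	// the error is ignored here and only the printed state matters.
	out, _ := run("status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	log.Printf("apiserver: %s", out) // expected: Paused
	out, _ = run("status", "--format={{.Kubelet}}", "-p", profile, "-n", profile)
	log.Printf("kubelet: %s", out) // expected: Stopped
	if out, err := run("unpause", "-p", profile, "--alsologtostderr", "-v=1"); err != nil {
		log.Fatalf("unpause failed: %v\n%s", err, out)
	}
	out, _ = run("status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	log.Printf("apiserver after unpause: %s", out)
}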

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (73.93s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-777000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
E0124 10:36:23.628803    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:36:31.984171    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
E0124 10:36:36.612065    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:36:51.045677    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 10:37:04.303610    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-777000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (1m13.929057777s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (73.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-777000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c66546c5-3af4-4a02-aac2-0789ea1780b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c66546c5-3af4-4a02-aac2-0789ea1780b0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.013127819s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-777000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)
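
DeployApp creates a busybox pod from the repository's testdata/busybox.yaml in the profile's kubectl context, waits for it to run, and then reads its open-file limit with "ulimit -n". The sketch below reproduces those steps outside the harness; note it uses "kubectl wait" as a compact shortcut, whereas the test polls pods matching the integration-test=busybox label.

package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "embed-certs-777000"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	if out, err := kubectl("create", "-f", "testdata/busybox.yaml"); err != nil {
		log.Fatalf("create failed: %v\n%s", err, out)
	}
	// The test itself polls for pods labelled integration-test=busybox for up
	// to 8 minutes; "kubectl wait" is used here only as an equivalent shortcut.
	if out, err := kubectl("wait", "--for=condition=Ready", "pod", "busybox", "--timeout=8m"); err != nil {
		log.Fatalf("busybox never became ready: %v\n%s", err, out)
	}
	out, err := kubectl("exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	if err != nil {
		log.Fatalf("exec failed: %v\n%s", err, out)
	}
	log.Printf("open file limit inside the pod: %s", out)
}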

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-777000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-777000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-777000 --alsologtostderr -v=3
E0124 10:37:36.593111    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kindnet-129000/client.crt: no such file or directory
E0124 10:37:41.840238    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:37:45.210334    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-777000 --alsologtostderr -v=3: (10.995656159s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.41s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-777000 -n embed-certs-777000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-777000 -n embed-certs-777000: exit status 7 (116.393775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-777000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (306.04s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-777000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
E0124 10:37:53.730515    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
E0124 10:38:09.529415    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/calico-129000/client.crt: no such file or directory
E0124 10:38:26.342366    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/enable-default-cni-129000/client.crt: no such file or directory
E0124 10:38:39.816944    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:39:07.502746    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
E0124 10:39:37.432066    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/bridge-129000/client.crt: no such file or directory
E0124 10:40:14.462308    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory
E0124 10:40:19.565380    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
E0124 10:40:19.570712    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
E0124 10:40:19.581091    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
E0124 10:40:19.602208    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
E0124 10:40:19.644467    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
E0124 10:40:19.724779    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
E0124 10:40:19.885675    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
E0124 10:40:20.206054    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
E0124 10:40:20.848202    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
E0124 10:40:22.129509    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
E0124 10:40:24.689731    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
E0124 10:40:29.811017    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
E0124 10:40:40.052017    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
E0124 10:40:43.356584    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/auto-129000/client.crt: no such file or directory
E0124 10:41:00.532531    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
E0124 10:41:32.019664    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory
E0124 10:41:36.646060    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/custom-flannel-129000/client.crt: no such file or directory
E0124 10:41:41.492969    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
E0124 10:41:51.078649    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 10:42:06.440346    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/auto-129000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-777000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (5m5.460288836s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-777000 -n embed-certs-777000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (306.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-dvvnz" [96c696f7-f915-4240-a101-290154e19f12] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0124 10:42:53.733952    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/skaffold-721000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-dvvnz" [96c696f7-f915-4240-a101-290154e19f12] Running
E0124 10:42:55.066550    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.01914106s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-dvvnz" [96c696f7-f915-4240-a101-290154e19f12] Running
E0124 10:43:03.413956    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/no-preload-307000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007486974s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-777000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-777000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.41s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-777000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-777000 -n embed-certs-777000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-777000 -n embed-certs-777000: exit status 2 (470.515065ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-777000 -n embed-certs-777000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-777000 -n embed-certs-777000: exit status 2 (435.098669ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-777000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-777000 -n embed-certs-777000

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-777000 -n embed-certs-777000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-436000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1
E0124 10:43:14.132086    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-436000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (1m6.617911558s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.62s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-436000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3e6bc8dd-9a78-4b51-a290-ce9da9c0fd54] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3e6bc8dd-9a78-4b51-a290-ce9da9c0fd54] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.013683694s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-436000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-436000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-436000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-436000 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-436000 --alsologtostderr -v=3: (10.921236047s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-436000 -n default-k8s-diff-port-436000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-436000 -n default-k8s-diff-port-436000: exit status 7 (115.370015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-436000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (313.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-436000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-436000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (5m13.199545466s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-436000 -n default-k8s-diff-port-436000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (313.75s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-97xx5" [fba2d68f-1e0e-46b5-8e26-9d7cf9d08e9e] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016933271s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-97xx5" [fba2d68f-1e0e-46b5-8e26-9d7cf9d08e9e] Running
E0124 10:50:02.867552    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/false-129000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008678361s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-436000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-436000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-436000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-436000 -n default-k8s-diff-port-436000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-436000 -n default-k8s-diff-port-436000: exit status 2 (429.704598ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-436000 -n default-k8s-diff-port-436000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-436000 -n default-k8s-diff-port-436000: exit status 2 (435.69584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-436000 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-436000 -n default-k8s-diff-port-436000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-436000 -n default-k8s-diff-port-436000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (64s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-783000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1
E0124 10:50:14.465795    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/kubenet-129000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-783000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (1m4.004707225s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (64.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-783000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.86s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-783000 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-783000 --alsologtostderr -v=3: (10.86415439s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-783000 -n newest-cni-783000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-783000 -n newest-cni-783000: exit status 7 (114.669285ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-783000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (25.64s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-783000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1
E0124 10:51:32.022283    4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/flannel-129000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-783000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (25.196961677s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-783000 -n newest-cni-783000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-783000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-783000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-783000 -n newest-cni-783000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-783000 -n newest-cni-783000: exit status 2 (435.473947ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-783000 -n newest-cni-783000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-783000 -n newest-cni-783000: exit status 2 (430.397981ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-783000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-783000 -n newest-cni-783000

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-783000 -n newest-cni-783000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.44s)

                                                
                                    

Test skip (18/306)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (14.93s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 9.936729ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:344: "registry-pgr97" [79d33def-fc7f-4a3d-9647-376066b8066b] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010579117s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-zsrcz" [2f2caa07-27d2-45d0-a341-96766be897ad] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008631668s
addons_test.go:305: (dbg) Run:  kubectl --context addons-709000 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-709000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-709000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.804553985s)
addons_test.go:320: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.93s)

                                                
                                    
TestAddons/parallel/Ingress (12.45s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-709000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-709000 replace --force -f testdata/nginx-ingress-v1.yaml

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:210: (dbg) Run:  kubectl --context addons-709000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c9a7136b-ba15-4282-86cf-2ad138dcd5a9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:344: "nginx" [c9a7136b-ba15-4282-86cf-2ad138dcd5a9] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.007698863s
addons_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 -p addons-709000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:247: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.45s)

x
+
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

x
+
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

x
+
TestFunctional/parallel/ServiceCmdConnect (8.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-997000 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-997000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-lfzlk" [d99d41b8-d90d-458c-80f8-33feba4d8cd9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-lfzlk" [d99d41b8-d90d-458c-80f8-33feba4d8cd9] Running
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.010825634s
functional_test.go:1576: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (8.13s)

x
+
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestNetworkPlugins/group/cilium (6.81s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-129000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-129000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-129000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-129000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-129000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-129000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-129000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-129000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-129000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-129000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-129000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: /etc/hosts:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: /etc/resolv.conf:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-129000

>>> host: crictl pods:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: crictl containers:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> k8s: describe netcat deployment:
error: context "cilium-129000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-129000" does not exist

>>> k8s: netcat logs:
error: context "cilium-129000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-129000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-129000" does not exist

>>> k8s: coredns logs:
error: context "cilium-129000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-129000" does not exist

>>> k8s: api server logs:
error: context "cilium-129000" does not exist

>>> host: /etc/cni:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: ip a s:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: ip r s:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: iptables-save:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: iptables table nat:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-129000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-129000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-129000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-129000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-129000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-129000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-129000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-129000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-129000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-129000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-129000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: kubelet daemon config:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> k8s: kubelet logs:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-129000

>>> host: docker daemon status:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: docker daemon config:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: docker system info:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: cri-docker daemon status:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: cri-docker daemon config:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: cri-dockerd version:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: containerd daemon status:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: containerd daemon config:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: containerd config dump:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: crio daemon status:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: crio daemon config:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: /etc/crio:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

>>> host: crio config:
* Profile "cilium-129000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-129000"

----------------------- debugLogs end: cilium-129000 [took: 6.323738921s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-129000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-129000
--- SKIP: TestNetworkPlugins/group/cilium (6.81s)

x
+
TestStartStop/group/disable-driver-mounts (0.42s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-724000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-724000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.42s)
