Test Report: Docker_macOS 15310

af24d50c21096344c09c5fff0b9181d55a181bf0:2022-11-07:26449

Test fail (16/295)

TestIngressAddonLegacy/StartLegacyK8sCluster (254.44s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-085453 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E1107 08:55:46.946322    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:58:03.112402    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:58:30.839671    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:58:30.841778    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:30.848219    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:30.858936    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:30.881118    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:30.922066    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:31.002523    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:31.164723    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:31.485460    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:32.127532    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:33.408657    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:35.969038    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:41.091551    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:51.332018    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-085453 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m14.413111419s)

-- stdout --
	* [ingress-addon-legacy-085453] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-085453 in cluster ingress-addon-legacy-085453
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.20 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I1107 08:54:53.444564    5852 out.go:296] Setting OutFile to fd 1 ...
	I1107 08:54:53.444729    5852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 08:54:53.444735    5852 out.go:309] Setting ErrFile to fd 2...
	I1107 08:54:53.444739    5852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 08:54:53.444846    5852 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	I1107 08:54:53.445404    5852 out.go:303] Setting JSON to false
	I1107 08:54:53.464920    5852 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1468,"bootTime":1667838625,"procs":382,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1107 08:54:53.465008    5852 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 08:54:53.486920    5852 out.go:177] * [ingress-addon-legacy-085453] minikube v1.28.0 on Darwin 13.0
	I1107 08:54:53.530004    5852 notify.go:220] Checking for updates...
	I1107 08:54:53.551867    5852 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 08:54:53.572849    5852 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 08:54:53.594622    5852 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 08:54:53.616090    5852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 08:54:53.637827    5852 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	I1107 08:54:53.660202    5852 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 08:54:53.722250    5852 docker.go:137] docker version: linux-20.10.20
	I1107 08:54:53.722406    5852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 08:54:53.865180    5852 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-07 16:54:53.79305721 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/loc
al/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 08:54:53.908878    5852 out.go:177] * Using the docker driver based on user configuration
	I1107 08:54:53.930985    5852 start.go:282] selected driver: docker
	I1107 08:54:53.931019    5852 start.go:808] validating driver "docker" against <nil>
	I1107 08:54:53.931042    5852 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 08:54:53.934905    5852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 08:54:54.075367    5852 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-07 16:54:53.984806793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 08:54:54.075488    5852 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 08:54:54.075626    5852 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 08:54:54.097581    5852 out.go:177] * Using Docker Desktop driver with root privileges
	I1107 08:54:54.119112    5852 cni.go:95] Creating CNI manager for ""
	I1107 08:54:54.119148    5852 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 08:54:54.119180    5852 start_flags.go:317] config:
	{Name:ingress-addon-legacy-085453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-085453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 08:54:54.141341    5852 out.go:177] * Starting control plane node ingress-addon-legacy-085453 in cluster ingress-addon-legacy-085453
	I1107 08:54:54.184106    5852 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 08:54:54.206289    5852 out.go:177] * Pulling base image ...
	I1107 08:54:54.248161    5852 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 08:54:54.248170    5852 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1107 08:54:54.301010    5852 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1107 08:54:54.301039    5852 cache.go:57] Caching tarball of preloaded images
	I1107 08:54:54.301235    5852 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1107 08:54:54.344305    5852 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1107 08:54:54.366170    5852 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1107 08:54:54.368898    5852 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 08:54:54.368921    5852 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 08:54:54.442694    5852 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1107 08:54:58.975377    5852 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1107 08:54:58.975540    5852 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1107 08:54:59.600943    5852 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1107 08:54:59.601179    5852 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/config.json ...
	I1107 08:54:59.601212    5852 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/config.json: {Name:mkd87cb689386a98c42ec5c9221126cd7a0cd281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 08:54:59.601483    5852 cache.go:208] Successfully downloaded all kic artifacts
	I1107 08:54:59.601509    5852 start.go:364] acquiring machines lock for ingress-addon-legacy-085453: {Name:mk63bffbe8a3bd903498e250074e58ae13193d28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 08:54:59.601601    5852 start.go:368] acquired machines lock for "ingress-addon-legacy-085453" in 84.374µs
	I1107 08:54:59.601627    5852 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-085453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-085453 Namespace:default APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 08:54:59.601675    5852 start.go:125] createHost starting for "" (driver="docker")
	I1107 08:54:59.623206    5852 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1107 08:54:59.623614    5852 start.go:159] libmachine.API.Create for "ingress-addon-legacy-085453" (driver="docker")
	I1107 08:54:59.623660    5852 client.go:168] LocalClient.Create starting
	I1107 08:54:59.623857    5852 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem
	I1107 08:54:59.623951    5852 main.go:134] libmachine: Decoding PEM data...
	I1107 08:54:59.623985    5852 main.go:134] libmachine: Parsing certificate...
	I1107 08:54:59.624084    5852 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem
	I1107 08:54:59.624154    5852 main.go:134] libmachine: Decoding PEM data...
	I1107 08:54:59.624171    5852 main.go:134] libmachine: Parsing certificate...
	I1107 08:54:59.645755    5852 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-085453 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 08:54:59.702275    5852 cli_runner.go:211] docker network inspect ingress-addon-legacy-085453 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 08:54:59.702397    5852 network_create.go:272] running [docker network inspect ingress-addon-legacy-085453] to gather additional debugging logs...
	I1107 08:54:59.702418    5852 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-085453
	W1107 08:54:59.756870    5852 cli_runner.go:211] docker network inspect ingress-addon-legacy-085453 returned with exit code 1
	I1107 08:54:59.756902    5852 network_create.go:275] error running [docker network inspect ingress-addon-legacy-085453]: docker network inspect ingress-addon-legacy-085453: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-085453
	I1107 08:54:59.756924    5852 network_create.go:277] output of [docker network inspect ingress-addon-legacy-085453]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-085453
	
	** /stderr **
	I1107 08:54:59.757034    5852 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 08:54:59.811673    5852 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000834218] misses:0}
	I1107 08:54:59.811714    5852 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 08:54:59.811729    5852 network_create.go:115] attempt to create docker network ingress-addon-legacy-085453 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1107 08:54:59.811827    5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-085453 ingress-addon-legacy-085453
	I1107 08:54:59.980214    5852 network_create.go:99] docker network ingress-addon-legacy-085453 192.168.49.0/24 created
	I1107 08:54:59.980253    5852 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-085453" container
	I1107 08:54:59.980381    5852 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 08:55:00.035644    5852 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-085453 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-085453 --label created_by.minikube.sigs.k8s.io=true
	I1107 08:55:00.089720    5852 oci.go:103] Successfully created a docker volume ingress-addon-legacy-085453
	I1107 08:55:00.089858    5852 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-085453-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-085453 --entrypoint /usr/bin/test -v ingress-addon-legacy-085453:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1107 08:55:00.537187    5852 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-085453
	I1107 08:55:00.537241    5852 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1107 08:55:00.537256    5852 kic.go:179] Starting extracting preloaded images to volume ...
	I1107 08:55:00.537384    5852 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-085453:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 08:55:05.232150    5852 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-085453:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (4.694615763s)
	I1107 08:55:05.232180    5852 kic.go:188] duration metric: took 4.694849 seconds to extract preloaded images to volume
	I1107 08:55:05.232315    5852 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 08:55:05.372048    5852 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-085453 --name ingress-addon-legacy-085453 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-085453 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-085453 --network ingress-addon-legacy-085453 --ip 192.168.49.2 --volume ingress-addon-legacy-085453:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1107 08:55:05.723106    5852 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-085453 --format={{.State.Running}}
	I1107 08:55:05.784816    5852 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-085453 --format={{.State.Status}}
	I1107 08:55:05.847501    5852 cli_runner.go:164] Run: docker exec ingress-addon-legacy-085453 stat /var/lib/dpkg/alternatives/iptables
	I1107 08:55:05.964212    5852 oci.go:144] the created container "ingress-addon-legacy-085453" has a running status.
	I1107 08:55:05.964241    5852 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa...
	I1107 08:55:06.251083    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1107 08:55:06.251163    5852 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 08:55:06.348722    5852 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-085453 --format={{.State.Status}}
	I1107 08:55:06.404493    5852 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 08:55:06.404509    5852 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-085453 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 08:55:06.503249    5852 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-085453 --format={{.State.Status}}
	I1107 08:55:06.559643    5852 machine.go:88] provisioning docker machine ...
	I1107 08:55:06.559684    5852 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-085453"
	I1107 08:55:06.559792    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
	I1107 08:55:06.617137    5852 main.go:134] libmachine: Using SSH client type: native
	I1107 08:55:06.617338    5852 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 50511 <nil> <nil>}
	I1107 08:55:06.617356    5852 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-085453 && echo "ingress-addon-legacy-085453" | sudo tee /etc/hostname
	I1107 08:55:06.739845    5852 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-085453
	
	I1107 08:55:06.739968    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
	I1107 08:55:06.796544    5852 main.go:134] libmachine: Using SSH client type: native
	I1107 08:55:06.796702    5852 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 50511 <nil> <nil>}
	I1107 08:55:06.796725    5852 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-085453' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-085453/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-085453' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 08:55:06.913010    5852 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 08:55:06.913031    5852 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15310-2115/.minikube CaCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15310-2115/.minikube}
	I1107 08:55:06.913051    5852 ubuntu.go:177] setting up certificates
	I1107 08:55:06.913060    5852 provision.go:83] configureAuth start
	I1107 08:55:06.913161    5852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-085453
	I1107 08:55:06.969038    5852 provision.go:138] copyHostCerts
	I1107 08:55:06.969092    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 08:55:06.969162    5852 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem, removing ...
	I1107 08:55:06.969170    5852 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 08:55:06.969278    5852 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem (1679 bytes)
	I1107 08:55:06.969449    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 08:55:06.969490    5852 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem, removing ...
	I1107 08:55:06.969494    5852 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 08:55:06.969560    5852 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem (1082 bytes)
	I1107 08:55:06.969690    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 08:55:06.969724    5852 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem, removing ...
	I1107 08:55:06.969729    5852 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 08:55:06.969807    5852 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem (1123 bytes)
	I1107 08:55:06.969952    5852 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-085453 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-085453]
	I1107 08:55:07.039589    5852 provision.go:172] copyRemoteCerts
	I1107 08:55:07.039652    5852 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 08:55:07.039713    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
	I1107 08:55:07.097248    5852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50511 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa Username:docker}
	I1107 08:55:07.182985    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 08:55:07.183072    5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 08:55:07.199588    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 08:55:07.199672    5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1107 08:55:07.216324    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 08:55:07.216408    5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 08:55:07.232805    5852 provision.go:86] duration metric: configureAuth took 319.72806ms
	I1107 08:55:07.232820    5852 ubuntu.go:193] setting minikube options for container-runtime
	I1107 08:55:07.232978    5852 config.go:180] Loaded profile config "ingress-addon-legacy-085453": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1107 08:55:07.233090    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
	I1107 08:55:07.289244    5852 main.go:134] libmachine: Using SSH client type: native
	I1107 08:55:07.289403    5852 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 50511 <nil> <nil>}
	I1107 08:55:07.289418    5852 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 08:55:07.404958    5852 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 08:55:07.404975    5852 ubuntu.go:71] root file system type: overlay
	I1107 08:55:07.405161    5852 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 08:55:07.405267    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
	I1107 08:55:07.463497    5852 main.go:134] libmachine: Using SSH client type: native
	I1107 08:55:07.463660    5852 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 50511 <nil> <nil>}
	I1107 08:55:07.463709    5852 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 08:55:07.594714    5852 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 08:55:07.594834    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
	I1107 08:55:07.651340    5852 main.go:134] libmachine: Using SSH client type: native
	I1107 08:55:07.651497    5852 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 50511 <nil> <nil>}
	I1107 08:55:07.651512    5852 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 08:55:08.229223    5852 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 16:55:07.602246042 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1107 08:55:08.229247    5852 machine.go:91] provisioned docker machine in 1.669558122s
	I1107 08:55:08.229254    5852 client.go:171] LocalClient.Create took 8.605450332s
	I1107 08:55:08.229275    5852 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-085453" took 8.605525265s
	I1107 08:55:08.229286    5852 start.go:300] post-start starting for "ingress-addon-legacy-085453" (driver="docker")
	I1107 08:55:08.229290    5852 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 08:55:08.229365    5852 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 08:55:08.229439    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
	I1107 08:55:08.286959    5852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50511 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa Username:docker}
	I1107 08:55:08.375817    5852 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 08:55:08.379457    5852 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 08:55:08.379475    5852 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 08:55:08.379485    5852 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 08:55:08.379490    5852 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 08:55:08.379511    5852 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/addons for local assets ...
	I1107 08:55:08.379613    5852 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/files for local assets ...
	I1107 08:55:08.379795    5852 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> 32672.pem in /etc/ssl/certs
	I1107 08:55:08.379802    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> /etc/ssl/certs/32672.pem
	I1107 08:55:08.380013    5852 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 08:55:08.387209    5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /etc/ssl/certs/32672.pem (1708 bytes)
	I1107 08:55:08.404198    5852 start.go:303] post-start completed in 174.896092ms
	I1107 08:55:08.404761    5852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-085453
	I1107 08:55:08.478220    5852 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/config.json ...
	I1107 08:55:08.478685    5852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 08:55:08.478764    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
	I1107 08:55:08.535353    5852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50511 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa Username:docker}
	I1107 08:55:08.617924    5852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 08:55:08.622420    5852 start.go:128] duration metric: createHost completed in 9.020590193s
	I1107 08:55:08.622439    5852 start.go:83] releasing machines lock for "ingress-addon-legacy-085453", held for 9.020686019s
	I1107 08:55:08.622543    5852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-085453
	I1107 08:55:08.678128    5852 ssh_runner.go:195] Run: systemctl --version
	I1107 08:55:08.678130    5852 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1107 08:55:08.678210    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
	I1107 08:55:08.678221    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
	I1107 08:55:08.739156    5852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50511 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa Username:docker}
	I1107 08:55:08.739776    5852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50511 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa Username:docker}
	I1107 08:55:09.070766    5852 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 08:55:09.080798    5852 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 08:55:09.080881    5852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 08:55:09.089857    5852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 08:55:09.102237    5852 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 08:55:09.170415    5852 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 08:55:09.233769    5852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 08:55:09.298241    5852 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 08:55:09.503525    5852 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 08:55:09.533469    5852 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 08:55:09.603684    5852 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.20 ...
	I1107 08:55:09.603912    5852 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-085453 dig +short host.docker.internal
	I1107 08:55:09.722681    5852 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1107 08:55:09.722795    5852 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1107 08:55:09.727054    5852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 08:55:09.737128    5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
	I1107 08:55:09.793842    5852 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1107 08:55:09.793932    5852 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 08:55:09.817255    5852 docker.go:613] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1107 08:55:09.817274    5852 docker.go:543] Images already preloaded, skipping extraction
	I1107 08:55:09.817364    5852 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 08:55:09.839726    5852 docker.go:613] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1107 08:55:09.839750    5852 cache_images.go:84] Images are preloaded, skipping loading
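Both image listings above are the same check run twice while minikube decides whether the preload tarball needs extracting. As a rough sketch (assuming the node container from this run is still up), the same listing can be reproduced by hand from the host:
	docker exec ingress-addon-legacy-085453 docker images --format '{{.Repository}}:{{.Tag}}'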
	I1107 08:55:09.839864    5852 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 08:55:09.906384    5852 cni.go:95] Creating CNI manager for ""
	I1107 08:55:09.906398    5852 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 08:55:09.906414    5852 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 08:55:09.906434    5852 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-085453 NodeName:ingress-addon-legacy-085453 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 08:55:09.906563    5852 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-085453"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
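	That is the whole generated kubeadm.yaml. A minimal sanity check, sketched on the assumption that the file has already been copied to /var/tmp/minikube/kubeadm.yaml on the node (the scp and cp steps appear further down) and that kubeadm v1.18's --dry-run behaves inside the kic container, would be a dry run with the same pinned binary:
	docker exec ingress-addon-legacy-085453 sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run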
	
	I1107 08:55:09.906650    5852 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-085453 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-085453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
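	If the kubelet later refuses to come up (as happens below), the unit and drop-in generated from this template can be inspected directly on the node; a hedged sketch using the paths shown in this run:
	docker exec ingress-addon-legacy-085453 sudo systemctl cat kubelet
	docker exec ingress-addon-legacy-085453 sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf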
	I1107 08:55:09.906732    5852 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1107 08:55:09.914375    5852 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 08:55:09.914436    5852 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 08:55:09.921428    5852 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1107 08:55:09.933812    5852 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1107 08:55:09.946127    5852 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I1107 08:55:09.958577    5852 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1107 08:55:09.962176    5852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 08:55:09.971547    5852 certs.go:54] Setting up /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453 for IP: 192.168.49.2
	I1107 08:55:09.971684    5852 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key
	I1107 08:55:09.971759    5852 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key
	I1107 08:55:09.971816    5852 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/client.key
	I1107 08:55:09.971836    5852 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/client.crt with IP's: []
	I1107 08:55:10.380145    5852 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/client.crt ...
	I1107 08:55:10.380162    5852 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/client.crt: {Name:mk942764529c7e206d68dbdd491c39f2f3870744 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 08:55:10.380468    5852 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/client.key ...
	I1107 08:55:10.380477    5852 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/client.key: {Name:mkd0ada144a28bdd30dbfe767b0b675765b4b996 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 08:55:10.380715    5852 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.key.dd3b5fb2
	I1107 08:55:10.380735    5852 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 08:55:10.468199    5852 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.crt.dd3b5fb2 ...
	I1107 08:55:10.468207    5852 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.crt.dd3b5fb2: {Name:mk56c82efb76b2092397c0435a922be028cad462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 08:55:10.468504    5852 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.key.dd3b5fb2 ...
	I1107 08:55:10.468512    5852 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.key.dd3b5fb2: {Name:mk4985429ae2d8822661c88f53b10c5e2aaa43a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 08:55:10.468736    5852 certs.go:320] copying /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.crt
	I1107 08:55:10.468891    5852 certs.go:324] copying /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.key
	I1107 08:55:10.469052    5852 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.key
	I1107 08:55:10.469069    5852 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.crt with IP's: []
	I1107 08:55:10.557531    5852 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.crt ...
	I1107 08:55:10.557542    5852 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.crt: {Name:mk4f5c9510e2e83eb58e6ef1560e201884dd0ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 08:55:10.557805    5852 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.key ...
	I1107 08:55:10.557818    5852 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.key: {Name:mk38f57a18f2d01999f61a0945dd3f6ad55b5f23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 08:55:10.558022    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1107 08:55:10.558064    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1107 08:55:10.558090    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1107 08:55:10.558114    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1107 08:55:10.558138    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 08:55:10.558162    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 08:55:10.558184    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 08:55:10.558205    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 08:55:10.558299    5852 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem (1338 bytes)
	W1107 08:55:10.558350    5852 certs.go:384] ignoring /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267_empty.pem, impossibly tiny 0 bytes
	I1107 08:55:10.558365    5852 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 08:55:10.558405    5852 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem (1082 bytes)
	I1107 08:55:10.558438    5852 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem (1123 bytes)
	I1107 08:55:10.558479    5852 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem (1679 bytes)
	I1107 08:55:10.558556    5852 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem (1708 bytes)
	I1107 08:55:10.558617    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem -> /usr/share/ca-certificates/3267.pem
	I1107 08:55:10.558642    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> /usr/share/ca-certificates/32672.pem
	I1107 08:55:10.558664    5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 08:55:10.559128    5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 08:55:10.576623    5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 08:55:10.592951    5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 08:55:10.609703    5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 08:55:10.626752    5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 08:55:10.643004    5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 08:55:10.659869    5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 08:55:10.676464    5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 08:55:10.693044    5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem --> /usr/share/ca-certificates/3267.pem (1338 bytes)
	I1107 08:55:10.709783    5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /usr/share/ca-certificates/32672.pem (1708 bytes)
	I1107 08:55:10.726516    5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 08:55:10.743384    5852 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 08:55:10.755827    5852 ssh_runner.go:195] Run: openssl version
	I1107 08:55:10.761243    5852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3267.pem && ln -fs /usr/share/ca-certificates/3267.pem /etc/ssl/certs/3267.pem"
	I1107 08:55:10.768705    5852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3267.pem
	I1107 08:55:10.772466    5852 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/3267.pem
	I1107 08:55:10.772519    5852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3267.pem
	I1107 08:55:10.777794    5852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3267.pem /etc/ssl/certs/51391683.0"
	I1107 08:55:10.785524    5852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32672.pem && ln -fs /usr/share/ca-certificates/32672.pem /etc/ssl/certs/32672.pem"
	I1107 08:55:10.793113    5852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32672.pem
	I1107 08:55:10.796923    5852 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/32672.pem
	I1107 08:55:10.796981    5852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32672.pem
	I1107 08:55:10.802084    5852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32672.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 08:55:10.809693    5852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 08:55:10.817353    5852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 08:55:10.821029    5852 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 08:55:10.821087    5852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 08:55:10.826285    5852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
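	The apiserver serving certificate generated above was requested for IPs 192.168.49.2, 10.96.0.1, 127.0.0.1 and 10.0.0.1; one way to confirm the SANs that actually landed on the node (a manual check, not something the harness runs) is:
	docker exec ingress-addon-legacy-085453 sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'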
	I1107 08:55:10.834031    5852 kubeadm.go:396] StartCluster: {Name:ingress-addon-legacy-085453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-085453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 08:55:10.834172    5852 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 08:55:10.856023    5852 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 08:55:10.865314    5852 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 08:55:10.872667    5852 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 08:55:10.872727    5852 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 08:55:10.880081    5852 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 08:55:10.880105    5852 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 08:55:10.924891    5852 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
	I1107 08:55:10.925247    5852 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 08:55:11.209867    5852 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 08:55:11.209960    5852 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 08:55:11.210041    5852 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 08:55:11.427340    5852 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 08:55:11.427846    5852 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 08:55:11.427877    5852 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1107 08:55:11.499836    5852 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 08:55:11.522711    5852 out.go:204]   - Generating certificates and keys ...
	I1107 08:55:11.522794    5852 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 08:55:11.522855    5852 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 08:55:11.733243    5852 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 08:55:11.901032    5852 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1107 08:55:12.081129    5852 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1107 08:55:12.151548    5852 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1107 08:55:12.267363    5852 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1107 08:55:12.267505    5852 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-085453 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1107 08:55:12.550254    5852 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1107 08:55:12.550409    5852 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-085453 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1107 08:55:12.708530    5852 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 08:55:13.183196    5852 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 08:55:13.256719    5852 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1107 08:55:13.256823    5852 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 08:55:13.394360    5852 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 08:55:13.599991    5852 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 08:55:13.693029    5852 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 08:55:13.779645    5852 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 08:55:13.780395    5852 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 08:55:13.823934    5852 out.go:204]   - Booting up control plane ...
	I1107 08:55:13.824122    5852 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 08:55:13.824288    5852 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 08:55:13.824417    5852 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 08:55:13.824554    5852 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 08:55:13.824836    5852 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 08:55:53.760814    5852 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1107 08:55:53.761335    5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 08:55:53.761472    5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 08:55:58.759856    5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 08:55:58.760079    5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 08:56:08.754431    5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 08:56:08.754636    5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 08:56:28.747116    5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 08:56:28.747424    5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 08:57:08.747066    5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 08:57:08.747605    5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 08:57:08.747620    5852 kubeadm.go:317] 
	I1107 08:57:08.747798    5852 kubeadm.go:317] 	Unfortunately, an error has occurred:
	I1107 08:57:08.747949    5852 kubeadm.go:317] 		timed out waiting for the condition
	I1107 08:57:08.747966    5852 kubeadm.go:317] 
	I1107 08:57:08.748022    5852 kubeadm.go:317] 	This error is likely caused by:
	I1107 08:57:08.748100    5852 kubeadm.go:317] 		- The kubelet is not running
	I1107 08:57:08.748295    5852 kubeadm.go:317] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1107 08:57:08.748311    5852 kubeadm.go:317] 
	I1107 08:57:08.748439    5852 kubeadm.go:317] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1107 08:57:08.748475    5852 kubeadm.go:317] 		- 'systemctl status kubelet'
	I1107 08:57:08.748513    5852 kubeadm.go:317] 		- 'journalctl -xeu kubelet'
	I1107 08:57:08.748519    5852 kubeadm.go:317] 
	I1107 08:57:08.748637    5852 kubeadm.go:317] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1107 08:57:08.748746    5852 kubeadm.go:317] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1107 08:57:08.748754    5852 kubeadm.go:317] 
	I1107 08:57:08.748828    5852 kubeadm.go:317] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1107 08:57:08.748891    5852 kubeadm.go:317] 		- 'docker ps -a | grep kube | grep -v pause'
	I1107 08:57:08.748996    5852 kubeadm.go:317] 		Once you have found the failing container, you can inspect its logs with:
	I1107 08:57:08.749024    5852 kubeadm.go:317] 		- 'docker logs CONTAINERID'
	I1107 08:57:08.749031    5852 kubeadm.go:317] 
	I1107 08:57:08.752746    5852 kubeadm.go:317] W1107 16:55:10.936244     958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1107 08:57:08.752824    5852 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1107 08:57:08.752943    5852 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
	I1107 08:57:08.753027    5852 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 08:57:08.753121    5852 kubeadm.go:317] W1107 16:55:13.798255     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1107 08:57:08.753216    5852 kubeadm.go:317] W1107 16:55:13.799084     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1107 08:57:08.753280    5852 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1107 08:57:08.753339    5852 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
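	kubeadm's hints above map onto the docker driver like this; a troubleshooting sketch against the node container from this run, not something the test itself executes:
	docker exec ingress-addon-legacy-085453 sudo systemctl status kubelet --no-pager
	docker exec ingress-addon-legacy-085453 sudo journalctl -xeu kubelet --no-pager | tail -n 100
	docker exec ingress-addon-legacy-085453 docker ps -a | grep kube | grep -v pause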
	W1107 08:57:08.753533    5852 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-085453 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-085453 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1107 16:55:10.936244     958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1107 16:55:13.798255     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1107 16:55:13.799084     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-085453 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-085453 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1107 16:55:10.936244     958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1107 16:55:13.798255     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1107 16:55:13.799084     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1107 08:57:08.753565    5852 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1107 08:57:09.165819    5852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 08:57:09.175219    5852 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 08:57:09.175285    5852 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 08:57:09.182909    5852 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 08:57:09.182942    5852 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 08:57:09.230127    5852 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
	I1107 08:57:09.230176    5852 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 08:57:09.515797    5852 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 08:57:09.515886    5852 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 08:57:09.515984    5852 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 08:57:09.728894    5852 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 08:57:09.729719    5852 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 08:57:09.729785    5852 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1107 08:57:09.802358    5852 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 08:57:09.823997    5852 out.go:204]   - Generating certificates and keys ...
	I1107 08:57:09.824069    5852 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 08:57:09.824129    5852 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 08:57:09.824213    5852 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1107 08:57:09.824282    5852 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1107 08:57:09.824363    5852 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1107 08:57:09.824426    5852 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1107 08:57:09.824497    5852 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1107 08:57:09.824541    5852 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1107 08:57:09.824622    5852 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1107 08:57:09.824692    5852 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1107 08:57:09.824725    5852 kubeadm.go:317] [certs] Using the existing "sa" key
	I1107 08:57:09.824764    5852 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 08:57:10.045059    5852 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 08:57:10.113668    5852 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 08:57:10.230092    5852 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 08:57:10.313926    5852 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 08:57:10.314534    5852 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 08:57:10.336219    5852 out.go:204]   - Booting up control plane ...
	I1107 08:57:10.336402    5852 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 08:57:10.336522    5852 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 08:57:10.336646    5852 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 08:57:10.336764    5852 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 08:57:10.336995    5852 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 08:57:50.314839    5852 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1107 08:57:50.315488    5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 08:57:50.315642    5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 08:57:55.313117    5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 08:57:55.313282    5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 08:58:05.308726    5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 08:58:05.308963    5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 08:58:25.295449    5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 08:58:25.295609    5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 08:59:05.270610    5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 08:59:05.270906    5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 08:59:05.270924    5852 kubeadm.go:317] 
	I1107 08:59:05.270975    5852 kubeadm.go:317] 	Unfortunately, an error has occurred:
	I1107 08:59:05.271053    5852 kubeadm.go:317] 		timed out waiting for the condition
	I1107 08:59:05.271070    5852 kubeadm.go:317] 
	I1107 08:59:05.271106    5852 kubeadm.go:317] 	This error is likely caused by:
	I1107 08:59:05.271147    5852 kubeadm.go:317] 		- The kubelet is not running
	I1107 08:59:05.271261    5852 kubeadm.go:317] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1107 08:59:05.271278    5852 kubeadm.go:317] 
	I1107 08:59:05.271383    5852 kubeadm.go:317] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1107 08:59:05.271424    5852 kubeadm.go:317] 		- 'systemctl status kubelet'
	I1107 08:59:05.271459    5852 kubeadm.go:317] 		- 'journalctl -xeu kubelet'
	I1107 08:59:05.271464    5852 kubeadm.go:317] 
	I1107 08:59:05.271566    5852 kubeadm.go:317] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1107 08:59:05.271666    5852 kubeadm.go:317] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1107 08:59:05.271678    5852 kubeadm.go:317] 
	I1107 08:59:05.271803    5852 kubeadm.go:317] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1107 08:59:05.271856    5852 kubeadm.go:317] 		- 'docker ps -a | grep kube | grep -v pause'
	I1107 08:59:05.271941    5852 kubeadm.go:317] 		Once you have found the failing container, you can inspect its logs with:
	I1107 08:59:05.271975    5852 kubeadm.go:317] 		- 'docker logs CONTAINERID'
	I1107 08:59:05.271983    5852 kubeadm.go:317] 
	I1107 08:59:05.274658    5852 kubeadm.go:317] W1107 16:57:09.212600    3465 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1107 08:59:05.274725    5852 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1107 08:59:05.274860    5852 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
	I1107 08:59:05.274960    5852 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 08:59:05.275057    5852 kubeadm.go:317] W1107 16:57:10.300670    3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1107 08:59:05.275156    5852 kubeadm.go:317] W1107 16:57:10.301471    3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1107 08:59:05.275234    5852 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1107 08:59:05.275289    5852 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1107 08:59:05.275309    5852 kubeadm.go:398] StartCluster complete in 3m54.386737293s
	I1107 08:59:05.275404    5852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 08:59:05.297063    5852 logs.go:274] 0 containers: []
	W1107 08:59:05.297076    5852 logs.go:276] No container was found matching "kube-apiserver"
	I1107 08:59:05.297165    5852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 08:59:05.318062    5852 logs.go:274] 0 containers: []
	W1107 08:59:05.318074    5852 logs.go:276] No container was found matching "etcd"
	I1107 08:59:05.318158    5852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 08:59:05.339967    5852 logs.go:274] 0 containers: []
	W1107 08:59:05.339979    5852 logs.go:276] No container was found matching "coredns"
	I1107 08:59:05.340061    5852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 08:59:05.361060    5852 logs.go:274] 0 containers: []
	W1107 08:59:05.361072    5852 logs.go:276] No container was found matching "kube-scheduler"
	I1107 08:59:05.361159    5852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 08:59:05.383943    5852 logs.go:274] 0 containers: []
	W1107 08:59:05.383954    5852 logs.go:276] No container was found matching "kube-proxy"
	I1107 08:59:05.384039    5852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 08:59:05.405732    5852 logs.go:274] 0 containers: []
	W1107 08:59:05.405745    5852 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 08:59:05.405825    5852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 08:59:05.426879    5852 logs.go:274] 0 containers: []
	W1107 08:59:05.426890    5852 logs.go:276] No container was found matching "storage-provisioner"
	I1107 08:59:05.426983    5852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 08:59:05.448113    5852 logs.go:274] 0 containers: []
	W1107 08:59:05.448126    5852 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 08:59:05.448133    5852 logs.go:123] Gathering logs for dmesg ...
	I1107 08:59:05.448140    5852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 08:59:05.459904    5852 logs.go:123] Gathering logs for describe nodes ...
	I1107 08:59:05.459916    5852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 08:59:05.511297    5852 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 08:59:05.511308    5852 logs.go:123] Gathering logs for Docker ...
	I1107 08:59:05.511314    5852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 08:59:05.526573    5852 logs.go:123] Gathering logs for container status ...
	I1107 08:59:05.526585    5852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 08:59:07.578517    5852 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051866863s)
	I1107 08:59:07.578628    5852 logs.go:123] Gathering logs for kubelet ...
	I1107 08:59:07.578634    5852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1107 08:59:07.616682    5852 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1107 16:57:09.212600    3465 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1107 16:57:10.300670    3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1107 16:57:10.301471    3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1107 08:59:07.616706    5852 out.go:239] * 
	* 
	W1107 08:59:07.616826    5852 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1107 16:57:09.212600    3465 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1107 16:57:10.300670    3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1107 16:57:10.301471    3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1107 16:57:09.212600    3465 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1107 16:57:10.300670    3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1107 16:57:10.301471    3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 08:59:07.616850    5852 out.go:239] * 
	* 
	W1107 08:59:07.617502    5852 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 08:59:07.682267    5852 out.go:177] 
	W1107 08:59:07.726668    5852 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1107 16:57:09.212600    3465 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1107 16:57:10.300670    3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1107 16:57:10.301471    3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1107 16:57:09.212600    3465 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1107 16:57:10.300670    3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1107 16:57:10.301471    3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 08:59:07.726730    5852 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1107 08:59:07.726774    5852 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1107 08:59:07.775798    5852 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-085453 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (254.44s)
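The start fails because the kubelet never becomes healthy during kubeadm init: every probe of http://localhost:10248/healthz above is refused, so no control-plane containers are created and the apiserver on localhost:8443 stays unreachable. The report's own suggestion line points at the kubelet cgroup driver. Below is a minimal shell sketch of that follow-up; the profile name, version and flags simply mirror the failing invocation above, and the snippet is illustrative rather than part of the recorded run.

	# Inspect the kubelet and any Kubernetes containers inside the minikube node
	minikube -p ingress-addon-legacy-085453 ssh "sudo systemctl status kubelet"
	minikube -p ingress-addon-legacy-085453 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	minikube -p ingress-addon-legacy-085453 ssh "docker ps -a | grep kube | grep -v pause"

	# Retry the start with the cgroup-driver override suggested by the report
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-085453 --kubernetes-version=v1.18.20 \
	  --memory=4096 --wait=true --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd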

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.57s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-085453 addons enable ingress --alsologtostderr -v=5
E1107 08:59:11.813963    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:59:52.777488    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-085453 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.118695977s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 08:59:07.945377    6174 out.go:296] Setting OutFile to fd 1 ...
	I1107 08:59:07.945667    6174 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 08:59:07.945674    6174 out.go:309] Setting ErrFile to fd 2...
	I1107 08:59:07.945678    6174 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 08:59:07.945798    6174 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	I1107 08:59:07.967419    6174 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1107 08:59:07.988594    6174 config.go:180] Loaded profile config "ingress-addon-legacy-085453": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1107 08:59:07.988615    6174 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-085453"
	I1107 08:59:07.988629    6174 addons.go:227] Setting addon ingress=true in "ingress-addon-legacy-085453"
	I1107 08:59:07.988948    6174 host.go:66] Checking if "ingress-addon-legacy-085453" exists ...
	I1107 08:59:07.989490    6174 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-085453 --format={{.State.Status}}
	I1107 08:59:08.067677    6174 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1107 08:59:08.088735    6174 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I1107 08:59:08.110414    6174 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1107 08:59:08.131796    6174 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1107 08:59:08.153440    6174 addons.go:419] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1107 08:59:08.153461    6174 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
	I1107 08:59:08.153562    6174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
	I1107 08:59:08.209813    6174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50511 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa Username:docker}
	I1107 08:59:08.307563    6174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 08:59:08.358391    6174 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:08.358410    6174 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:08.634760    6174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 08:59:08.686930    6174 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:08.686945    6174 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:09.229508    6174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 08:59:09.283648    6174 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:09.283666    6174 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:09.941110    6174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 08:59:09.994355    6174 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:09.994375    6174 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:10.786150    6174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 08:59:10.838456    6174 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:10.838477    6174 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:12.009317    6174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 08:59:12.063367    6174 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:12.063382    6174 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:14.318829    6174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 08:59:14.369653    6174 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:14.369668    6174 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:15.982754    6174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 08:59:16.034485    6174 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:16.034499    6174 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:18.840642    6174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 08:59:18.892939    6174 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:18.892955    6174 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:22.719301    6174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 08:59:22.772562    6174 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:22.772579    6174 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:30.470437    6174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 08:59:30.525417    6174 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:30.525436    6174 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:45.161809    6174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 08:59:45.213937    6174 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 08:59:45.213954    6174 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:13.621506    6174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 09:00:13.673786    6174 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:13.673801    6174 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:36.844915    6174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 09:00:36.896991    6174 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:36.897019    6174 addons.go:457] Verifying addon ingress=true in "ingress-addon-legacy-085453"
	I1107 09:00:36.918995    6174 out.go:177] * Verifying ingress addon...
	I1107 09:00:36.941978    6174 out.go:177] 
	W1107 09:00:36.963510    6174 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-085453" does not exist: client config: context "ingress-addon-legacy-085453" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-085453" does not exist: client config: context "ingress-addon-legacy-085453" does not exist]
	W1107 09:00:36.963526    6174 out.go:239] * 
	* 
	W1107 09:00:36.966080    6174 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 09:00:36.987642    6174 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
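This addon failure is a direct consequence of the previous test: every kubectl apply retry above is refused at localhost:8443 because the apiserver never started, and the "ingress-addon-legacy-085453" kubeconfig context was never written. An illustrative pre-check before retrying the addon, assuming the same profile name, might look like the following (not part of the recorded run):

	minikube status -p ingress-addon-legacy-085453            # control plane must report Running
	kubectl config get-contexts                               # the profile's context has to exist
	minikube -p ingress-addon-legacy-085453 ssh "docker ps -a --filter=name=k8s_kube-apiserver"
	out/minikube-darwin-amd64 -p ingress-addon-legacy-085453 addons enable ingress --alsologtostderr -v=5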
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-085453
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-085453:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518",
	        "Created": "2022-11-07T16:55:05.433877824Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T16:55:05.719624117Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518/hostname",
	        "HostsPath": "/var/lib/docker/containers/5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518/hosts",
	        "LogPath": "/var/lib/docker/containers/5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518/5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518-json.log",
	        "Name": "/ingress-addon-legacy-085453",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-085453:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-085453",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0c5cf18185cea2f5399d716c20d515404ecf3f3ac9444aedab627141c032c333-init/diff:/var/lib/docker/overlay2/8ef76795356079208b1acef7376be67a28d951b743a50dd56a60b0d456568ae9/diff:/var/lib/docker/overlay2/f9288d2baad2a30057af35c115d2ebfb4650d5d1d798a60a2334facced392980/diff:/var/lib/docker/overlay2/270f6ca71b47e51691c54d669e6e8e86c321939c053498289406eab5aa0462f5/diff:/var/lib/docker/overlay2/ebe3fe002872a87a7cc54a77192a2ea1f0efb3730f887abec35652e72f152f46/diff:/var/lib/docker/overlay2/83c9d5ae9817ab2b318ad7ba44ade4fe9c22378e15e338b8fe94c5998fbac5c4/diff:/var/lib/docker/overlay2/6426b1d4e4f369bec5066b3c17c47f9c451787be596ba417de62155901d14061/diff:/var/lib/docker/overlay2/f409955dc1056669a5ee00fa64ecfa9733f3de1a92beefeeca73cba51d930189/diff:/var/lib/docker/overlay2/3ecb7ca97b99ba70c03450a3d6d4a4452c7e9e348eec3cf89e6e8ee51aba6a8b/diff:/var/lib/docker/overlay2/9dd8fffded9665b1b7a326cb2bb3e29e3b716cdba6544940490326ddcbfe2bda/diff:/var/lib/docker/overlay2/b43aed
d977d94230f77efb53c193c1a02895ea314fcdece500155052dfeb6b29/diff:/var/lib/docker/overlay2/ba3bd8f651e3503bd8eadf3ce01b8930edaf7eb6af4044593c756be0f3c5d03a/diff:/var/lib/docker/overlay2/359c64a8e323929352da8612c231ccf0f6be76af37c8a208a9ee98c3bce5e2a1/diff:/var/lib/docker/overlay2/868ec2aea7bce1a74dcdf6c7a708b34838e8c08e795aad6e5b974d1ab15b719c/diff:/var/lib/docker/overlay2/0438a0192165f11b19940586b456c07bfa31d015147b9d008aafaacc09fbc40c/diff:/var/lib/docker/overlay2/80a13b6491a8f9f1c0f6848a375575c20f50d592cb34f21491050776a56fca61/diff:/var/lib/docker/overlay2/dd29a4d45bcf60d3684330374a82b3f3bde4245c5d49661ffdd516cd0c0af260/diff:/var/lib/docker/overlay2/ef8c6936e45d238f2880da0d94945cb610fba8a9e38cdfb3ae6674a82a8f0480/diff:/var/lib/docker/overlay2/9934f45b2cecf953b6f56ee634f63c3dd99c8c358b74fee64fdc62cef64f7723/diff:/var/lib/docker/overlay2/f5ccdcf1811b84ddfcc2efdc07e5feefa2803c1fe476b6653b0a6af55c2e684f/diff:/var/lib/docker/overlay2/2b3b062a0d083aedf009b6c8dde21debe0396b301936ec1950364a1d0ef86b6d/diff:/var/lib/d
ocker/overlay2/db91c57bd6754e3dbdc6c234df413d494606d408e284454bf7ab30cd23f9e840/diff:/var/lib/docker/overlay2/6538f86ce38383e3a133480b44c25afa8b31a61935d6f87270e2cc139e424425/diff:/var/lib/docker/overlay2/80972648e2aa65675fe7f3de22feae57951c0092d5f963f2430650b071940bba/diff:/var/lib/docker/overlay2/19dc0f28f2a85362d2b586f65ab00efa8a97868656af9dc5911259dd3ca649ac/diff:/var/lib/docker/overlay2/99eff050eadab512f36f80d63e8b57d9aa45ef607d723d7ac3f20ece8310a758/diff:/var/lib/docker/overlay2/d6309ab08fa5212992e2b5125645ad32bce2940b50c5e8a5b72e7c7531eb80b4/diff:/var/lib/docker/overlay2/c4d3d6d4212753e50a5f68577281382a30773fb33ca98730aebdfd86d48f612c/diff:/var/lib/docker/overlay2/4292068e16912b59305479ae020d9aa923d57157c4a28dd11e69102be9c1541a/diff:/var/lib/docker/overlay2/2274c567eadc1a99c8173258b3794df0df44fd1abac0aaae2100133ad15b3f30/diff:/var/lib/docker/overlay2/e3bb447cc7563c5af39c4076a93bb7b33bd1a7c6c5ccef7fea2a6a99deddf9f3/diff:/var/lib/docker/overlay2/4329b8a4d7648d8e3bb46a144b9939a5026fa69e5ac188a778cf6ede21a
9627e/diff:/var/lib/docker/overlay2/b600639ff99f881a9eb993fd36e2faf1c0f88a869675ab9d8ec116efc2642784/diff:/var/lib/docker/overlay2/da083fbec4f2fa2681bbaaaa559fdcc46ec2a520e7b9ced39197e805a661fda3/diff:/var/lib/docker/overlay2/63848d00284d16d750a7e746c8be62f8c15819bc2fcb72297788f3c9647257e6/diff:/var/lib/docker/overlay2/3fd667008c6a5c1c5828bb4e003fc21c477a31c4d59b5b675a3886d8a7cb782d/diff:/var/lib/docker/overlay2/6b125cd950aed912fcc597ce8a96bbb5af3dbba111d6eb683ea981387e02e99d/diff:/var/lib/docker/overlay2/b4c672faa14a55ba585c6063024785d7913afc546dd6d04975591d2e13d7b52f/diff:/var/lib/docker/overlay2/c2c0287a05145a26d3313d4e33799ea96103a20115734a66a3c2af8fe728b170/diff:/var/lib/docker/overlay2/dba7b9788bd657997c8cee3b3ef21f9bc4ade7b5a0da25526255047311da571d/diff:/var/lib/docker/overlay2/1f3ae87b3ce804fde9f857de6cb225d5afa00aa39260d197d77f67e840e2d285/diff:/var/lib/docker/overlay2/603b72832425bade21ef2d76583dbe61a46ff7fbe7277673cbc6cd52cf7613dd/diff:/var/lib/docker/overlay2/a47793b1e0564c094c05134af06d2d46a6bcb7
6089b3836b831863ef51c21684/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c5cf18185cea2f5399d716c20d515404ecf3f3ac9444aedab627141c032c333/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c5cf18185cea2f5399d716c20d515404ecf3f3ac9444aedab627141c032c333/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c5cf18185cea2f5399d716c20d515404ecf3f3ac9444aedab627141c032c333/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-085453",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-085453/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-085453",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-085453",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-085453",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d824b3c30574e950e9ce9f252a4f9d905798a3d7143c66a1d4524f3c4a01a6d0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50511"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50507"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50508"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50509"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50510"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d824b3c30574",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-085453": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5846d4f80351",
	                        "ingress-addon-legacy-085453"
	                    ],
	                    "NetworkID": "66653d9c0a6458ea77ffb3751abafe404eafa71b0132b9111c438256e9b85028",
	                    "EndpointID": "8ebb299adec2d6b4410c816554a7ec9e2f1b131640615fcb658e2c83408f0ea3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-085453 -n ingress-addon-legacy-085453
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-085453 -n ingress-addon-legacy-085453: exit status 6 (390.486213ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 09:00:37.451435    6291 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-085453" does not appear in /Users/jenkins/minikube-integration/15310-2115/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-085453" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.57s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.51s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-085453 addons enable ingress-dns --alsologtostderr -v=5
E1107 09:01:14.699771    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-085453 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.053135976s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 09:00:37.515317    6302 out.go:296] Setting OutFile to fd 1 ...
	I1107 09:00:37.515586    6302 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:00:37.515592    6302 out.go:309] Setting ErrFile to fd 2...
	I1107 09:00:37.515596    6302 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:00:37.515718    6302 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	I1107 09:00:37.537820    6302 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1107 09:00:37.560227    6302 config.go:180] Loaded profile config "ingress-addon-legacy-085453": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1107 09:00:37.560259    6302 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-085453"
	I1107 09:00:37.560270    6302 addons.go:227] Setting addon ingress-dns=true in "ingress-addon-legacy-085453"
	I1107 09:00:37.560832    6302 host.go:66] Checking if "ingress-addon-legacy-085453" exists ...
	I1107 09:00:37.561773    6302 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-085453 --format={{.State.Status}}
	I1107 09:00:37.639933    6302 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1107 09:00:37.661662    6302 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I1107 09:00:37.683534    6302 addons.go:419] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1107 09:00:37.683571    6302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I1107 09:00:37.683765    6302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
	I1107 09:00:37.742486    6302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50511 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa Username:docker}
	I1107 09:00:37.834516    6302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 09:00:37.884107    6302 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:37.884125    6302 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:38.160471    6302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 09:00:38.212624    6302 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:38.212643    6302 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:38.755153    6302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 09:00:38.808380    6302 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:38.808394    6302 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:39.463671    6302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 09:00:39.515204    6302 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:39.515219    6302 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:40.308775    6302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 09:00:40.362786    6302 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:40.362801    6302 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:41.535406    6302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 09:00:41.588147    6302 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:41.588171    6302 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:43.841857    6302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 09:00:43.894107    6302 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:43.894134    6302 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:45.505230    6302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 09:00:45.557371    6302 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:45.557389    6302 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:48.364176    6302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 09:00:48.416405    6302 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:48.416419    6302 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:52.242731    6302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 09:00:52.296070    6302 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:52.296085    6302 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:00:59.994196    6302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 09:01:00.047239    6302 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:01:00.047256    6302 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:01:14.684848    6302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 09:01:14.737929    6302 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:01:14.737943    6302 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:01:43.147689    6302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 09:01:43.199005    6302 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:01:43.199021    6302 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:02:06.370160    6302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 09:02:06.422464    6302 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 09:02:06.444634    6302 out.go:177] 
	W1107 09:02:06.467283    6302 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W1107 09:02:06.467309    6302 out.go:239] * 
	* 
	W1107 09:02:06.471225    6302 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 09:02:06.493210    6302 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-085453
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-085453:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518",
	        "Created": "2022-11-07T16:55:05.433877824Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T16:55:05.719624117Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518/hostname",
	        "HostsPath": "/var/lib/docker/containers/5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518/hosts",
	        "LogPath": "/var/lib/docker/containers/5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518/5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518-json.log",
	        "Name": "/ingress-addon-legacy-085453",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-085453:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-085453",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0c5cf18185cea2f5399d716c20d515404ecf3f3ac9444aedab627141c032c333-init/diff:/var/lib/docker/overlay2/8ef76795356079208b1acef7376be67a28d951b743a50dd56a60b0d456568ae9/diff:/var/lib/docker/overlay2/f9288d2baad2a30057af35c115d2ebfb4650d5d1d798a60a2334facced392980/diff:/var/lib/docker/overlay2/270f6ca71b47e51691c54d669e6e8e86c321939c053498289406eab5aa0462f5/diff:/var/lib/docker/overlay2/ebe3fe002872a87a7cc54a77192a2ea1f0efb3730f887abec35652e72f152f46/diff:/var/lib/docker/overlay2/83c9d5ae9817ab2b318ad7ba44ade4fe9c22378e15e338b8fe94c5998fbac5c4/diff:/var/lib/docker/overlay2/6426b1d4e4f369bec5066b3c17c47f9c451787be596ba417de62155901d14061/diff:/var/lib/docker/overlay2/f409955dc1056669a5ee00fa64ecfa9733f3de1a92beefeeca73cba51d930189/diff:/var/lib/docker/overlay2/3ecb7ca97b99ba70c03450a3d6d4a4452c7e9e348eec3cf89e6e8ee51aba6a8b/diff:/var/lib/docker/overlay2/9dd8fffded9665b1b7a326cb2bb3e29e3b716cdba6544940490326ddcbfe2bda/diff:/var/lib/docker/overlay2/b43aed
d977d94230f77efb53c193c1a02895ea314fcdece500155052dfeb6b29/diff:/var/lib/docker/overlay2/ba3bd8f651e3503bd8eadf3ce01b8930edaf7eb6af4044593c756be0f3c5d03a/diff:/var/lib/docker/overlay2/359c64a8e323929352da8612c231ccf0f6be76af37c8a208a9ee98c3bce5e2a1/diff:/var/lib/docker/overlay2/868ec2aea7bce1a74dcdf6c7a708b34838e8c08e795aad6e5b974d1ab15b719c/diff:/var/lib/docker/overlay2/0438a0192165f11b19940586b456c07bfa31d015147b9d008aafaacc09fbc40c/diff:/var/lib/docker/overlay2/80a13b6491a8f9f1c0f6848a375575c20f50d592cb34f21491050776a56fca61/diff:/var/lib/docker/overlay2/dd29a4d45bcf60d3684330374a82b3f3bde4245c5d49661ffdd516cd0c0af260/diff:/var/lib/docker/overlay2/ef8c6936e45d238f2880da0d94945cb610fba8a9e38cdfb3ae6674a82a8f0480/diff:/var/lib/docker/overlay2/9934f45b2cecf953b6f56ee634f63c3dd99c8c358b74fee64fdc62cef64f7723/diff:/var/lib/docker/overlay2/f5ccdcf1811b84ddfcc2efdc07e5feefa2803c1fe476b6653b0a6af55c2e684f/diff:/var/lib/docker/overlay2/2b3b062a0d083aedf009b6c8dde21debe0396b301936ec1950364a1d0ef86b6d/diff:/var/lib/d
ocker/overlay2/db91c57bd6754e3dbdc6c234df413d494606d408e284454bf7ab30cd23f9e840/diff:/var/lib/docker/overlay2/6538f86ce38383e3a133480b44c25afa8b31a61935d6f87270e2cc139e424425/diff:/var/lib/docker/overlay2/80972648e2aa65675fe7f3de22feae57951c0092d5f963f2430650b071940bba/diff:/var/lib/docker/overlay2/19dc0f28f2a85362d2b586f65ab00efa8a97868656af9dc5911259dd3ca649ac/diff:/var/lib/docker/overlay2/99eff050eadab512f36f80d63e8b57d9aa45ef607d723d7ac3f20ece8310a758/diff:/var/lib/docker/overlay2/d6309ab08fa5212992e2b5125645ad32bce2940b50c5e8a5b72e7c7531eb80b4/diff:/var/lib/docker/overlay2/c4d3d6d4212753e50a5f68577281382a30773fb33ca98730aebdfd86d48f612c/diff:/var/lib/docker/overlay2/4292068e16912b59305479ae020d9aa923d57157c4a28dd11e69102be9c1541a/diff:/var/lib/docker/overlay2/2274c567eadc1a99c8173258b3794df0df44fd1abac0aaae2100133ad15b3f30/diff:/var/lib/docker/overlay2/e3bb447cc7563c5af39c4076a93bb7b33bd1a7c6c5ccef7fea2a6a99deddf9f3/diff:/var/lib/docker/overlay2/4329b8a4d7648d8e3bb46a144b9939a5026fa69e5ac188a778cf6ede21a
9627e/diff:/var/lib/docker/overlay2/b600639ff99f881a9eb993fd36e2faf1c0f88a869675ab9d8ec116efc2642784/diff:/var/lib/docker/overlay2/da083fbec4f2fa2681bbaaaa559fdcc46ec2a520e7b9ced39197e805a661fda3/diff:/var/lib/docker/overlay2/63848d00284d16d750a7e746c8be62f8c15819bc2fcb72297788f3c9647257e6/diff:/var/lib/docker/overlay2/3fd667008c6a5c1c5828bb4e003fc21c477a31c4d59b5b675a3886d8a7cb782d/diff:/var/lib/docker/overlay2/6b125cd950aed912fcc597ce8a96bbb5af3dbba111d6eb683ea981387e02e99d/diff:/var/lib/docker/overlay2/b4c672faa14a55ba585c6063024785d7913afc546dd6d04975591d2e13d7b52f/diff:/var/lib/docker/overlay2/c2c0287a05145a26d3313d4e33799ea96103a20115734a66a3c2af8fe728b170/diff:/var/lib/docker/overlay2/dba7b9788bd657997c8cee3b3ef21f9bc4ade7b5a0da25526255047311da571d/diff:/var/lib/docker/overlay2/1f3ae87b3ce804fde9f857de6cb225d5afa00aa39260d197d77f67e840e2d285/diff:/var/lib/docker/overlay2/603b72832425bade21ef2d76583dbe61a46ff7fbe7277673cbc6cd52cf7613dd/diff:/var/lib/docker/overlay2/a47793b1e0564c094c05134af06d2d46a6bcb7
6089b3836b831863ef51c21684/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c5cf18185cea2f5399d716c20d515404ecf3f3ac9444aedab627141c032c333/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c5cf18185cea2f5399d716c20d515404ecf3f3ac9444aedab627141c032c333/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c5cf18185cea2f5399d716c20d515404ecf3f3ac9444aedab627141c032c333/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-085453",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-085453/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-085453",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-085453",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-085453",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d824b3c30574e950e9ce9f252a4f9d905798a3d7143c66a1d4524f3c4a01a6d0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50511"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50507"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50508"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50509"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50510"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d824b3c30574",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-085453": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5846d4f80351",
	                        "ingress-addon-legacy-085453"
	                    ],
	                    "NetworkID": "66653d9c0a6458ea77ffb3751abafe404eafa71b0132b9111c438256e9b85028",
	                    "EndpointID": "8ebb299adec2d6b4410c816554a7ec9e2f1b131640615fcb658e2c83408f0ea3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-085453 -n ingress-addon-legacy-085453
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-085453 -n ingress-addon-legacy-085453: exit status 6 (393.783088ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 09:02:06.960130    6403 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-085453" does not appear in /Users/jenkins/minikube-integration/15310-2115/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-085453" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.51s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:159: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-085453
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-085453:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518",
	        "Created": "2022-11-07T16:55:05.433877824Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T16:55:05.719624117Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518/hostname",
	        "HostsPath": "/var/lib/docker/containers/5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518/hosts",
	        "LogPath": "/var/lib/docker/containers/5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518/5846d4f803513bd520e6a9ef2b7dc0947b1bc6ef49dd514b9dda4b84fae2d518-json.log",
	        "Name": "/ingress-addon-legacy-085453",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-085453:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-085453",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0c5cf18185cea2f5399d716c20d515404ecf3f3ac9444aedab627141c032c333-init/diff:/var/lib/docker/overlay2/8ef76795356079208b1acef7376be67a28d951b743a50dd56a60b0d456568ae9/diff:/var/lib/docker/overlay2/f9288d2baad2a30057af35c115d2ebfb4650d5d1d798a60a2334facced392980/diff:/var/lib/docker/overlay2/270f6ca71b47e51691c54d669e6e8e86c321939c053498289406eab5aa0462f5/diff:/var/lib/docker/overlay2/ebe3fe002872a87a7cc54a77192a2ea1f0efb3730f887abec35652e72f152f46/diff:/var/lib/docker/overlay2/83c9d5ae9817ab2b318ad7ba44ade4fe9c22378e15e338b8fe94c5998fbac5c4/diff:/var/lib/docker/overlay2/6426b1d4e4f369bec5066b3c17c47f9c451787be596ba417de62155901d14061/diff:/var/lib/docker/overlay2/f409955dc1056669a5ee00fa64ecfa9733f3de1a92beefeeca73cba51d930189/diff:/var/lib/docker/overlay2/3ecb7ca97b99ba70c03450a3d6d4a4452c7e9e348eec3cf89e6e8ee51aba6a8b/diff:/var/lib/docker/overlay2/9dd8fffded9665b1b7a326cb2bb3e29e3b716cdba6544940490326ddcbfe2bda/diff:/var/lib/docker/overlay2/b43aed
d977d94230f77efb53c193c1a02895ea314fcdece500155052dfeb6b29/diff:/var/lib/docker/overlay2/ba3bd8f651e3503bd8eadf3ce01b8930edaf7eb6af4044593c756be0f3c5d03a/diff:/var/lib/docker/overlay2/359c64a8e323929352da8612c231ccf0f6be76af37c8a208a9ee98c3bce5e2a1/diff:/var/lib/docker/overlay2/868ec2aea7bce1a74dcdf6c7a708b34838e8c08e795aad6e5b974d1ab15b719c/diff:/var/lib/docker/overlay2/0438a0192165f11b19940586b456c07bfa31d015147b9d008aafaacc09fbc40c/diff:/var/lib/docker/overlay2/80a13b6491a8f9f1c0f6848a375575c20f50d592cb34f21491050776a56fca61/diff:/var/lib/docker/overlay2/dd29a4d45bcf60d3684330374a82b3f3bde4245c5d49661ffdd516cd0c0af260/diff:/var/lib/docker/overlay2/ef8c6936e45d238f2880da0d94945cb610fba8a9e38cdfb3ae6674a82a8f0480/diff:/var/lib/docker/overlay2/9934f45b2cecf953b6f56ee634f63c3dd99c8c358b74fee64fdc62cef64f7723/diff:/var/lib/docker/overlay2/f5ccdcf1811b84ddfcc2efdc07e5feefa2803c1fe476b6653b0a6af55c2e684f/diff:/var/lib/docker/overlay2/2b3b062a0d083aedf009b6c8dde21debe0396b301936ec1950364a1d0ef86b6d/diff:/var/lib/d
ocker/overlay2/db91c57bd6754e3dbdc6c234df413d494606d408e284454bf7ab30cd23f9e840/diff:/var/lib/docker/overlay2/6538f86ce38383e3a133480b44c25afa8b31a61935d6f87270e2cc139e424425/diff:/var/lib/docker/overlay2/80972648e2aa65675fe7f3de22feae57951c0092d5f963f2430650b071940bba/diff:/var/lib/docker/overlay2/19dc0f28f2a85362d2b586f65ab00efa8a97868656af9dc5911259dd3ca649ac/diff:/var/lib/docker/overlay2/99eff050eadab512f36f80d63e8b57d9aa45ef607d723d7ac3f20ece8310a758/diff:/var/lib/docker/overlay2/d6309ab08fa5212992e2b5125645ad32bce2940b50c5e8a5b72e7c7531eb80b4/diff:/var/lib/docker/overlay2/c4d3d6d4212753e50a5f68577281382a30773fb33ca98730aebdfd86d48f612c/diff:/var/lib/docker/overlay2/4292068e16912b59305479ae020d9aa923d57157c4a28dd11e69102be9c1541a/diff:/var/lib/docker/overlay2/2274c567eadc1a99c8173258b3794df0df44fd1abac0aaae2100133ad15b3f30/diff:/var/lib/docker/overlay2/e3bb447cc7563c5af39c4076a93bb7b33bd1a7c6c5ccef7fea2a6a99deddf9f3/diff:/var/lib/docker/overlay2/4329b8a4d7648d8e3bb46a144b9939a5026fa69e5ac188a778cf6ede21a
9627e/diff:/var/lib/docker/overlay2/b600639ff99f881a9eb993fd36e2faf1c0f88a869675ab9d8ec116efc2642784/diff:/var/lib/docker/overlay2/da083fbec4f2fa2681bbaaaa559fdcc46ec2a520e7b9ced39197e805a661fda3/diff:/var/lib/docker/overlay2/63848d00284d16d750a7e746c8be62f8c15819bc2fcb72297788f3c9647257e6/diff:/var/lib/docker/overlay2/3fd667008c6a5c1c5828bb4e003fc21c477a31c4d59b5b675a3886d8a7cb782d/diff:/var/lib/docker/overlay2/6b125cd950aed912fcc597ce8a96bbb5af3dbba111d6eb683ea981387e02e99d/diff:/var/lib/docker/overlay2/b4c672faa14a55ba585c6063024785d7913afc546dd6d04975591d2e13d7b52f/diff:/var/lib/docker/overlay2/c2c0287a05145a26d3313d4e33799ea96103a20115734a66a3c2af8fe728b170/diff:/var/lib/docker/overlay2/dba7b9788bd657997c8cee3b3ef21f9bc4ade7b5a0da25526255047311da571d/diff:/var/lib/docker/overlay2/1f3ae87b3ce804fde9f857de6cb225d5afa00aa39260d197d77f67e840e2d285/diff:/var/lib/docker/overlay2/603b72832425bade21ef2d76583dbe61a46ff7fbe7277673cbc6cd52cf7613dd/diff:/var/lib/docker/overlay2/a47793b1e0564c094c05134af06d2d46a6bcb7
6089b3836b831863ef51c21684/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c5cf18185cea2f5399d716c20d515404ecf3f3ac9444aedab627141c032c333/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c5cf18185cea2f5399d716c20d515404ecf3f3ac9444aedab627141c032c333/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c5cf18185cea2f5399d716c20d515404ecf3f3ac9444aedab627141c032c333/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-085453",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-085453/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-085453",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-085453",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-085453",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d824b3c30574e950e9ce9f252a4f9d905798a3d7143c66a1d4524f3c4a01a6d0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50511"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50507"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50508"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50509"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50510"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d824b3c30574",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-085453": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5846d4f80351",
	                        "ingress-addon-legacy-085453"
	                    ],
	                    "NetworkID": "66653d9c0a6458ea77ffb3751abafe404eafa71b0132b9111c438256e9b85028",
	                    "EndpointID": "8ebb299adec2d6b4410c816554a7ec9e2f1b131640615fcb658e2c83408f0ea3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
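(Editor's note: the block above is the full `docker inspect` JSON for the profile container. When only a couple of fields are needed, the same data can be pulled with a Go template, which is exactly what the harness does later in this log. A minimal sketch, reusing the container name from this run:)

	# Published host port for the container's SSH (22/tcp) endpoint
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-085453

	# IPv4 address assigned on the per-profile Docker network
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ingress-addon-legacy-085453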
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-085453 -n ingress-addon-legacy-085453
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-085453 -n ingress-addon-legacy-085453: exit status 6 (388.518639ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 09:02:07.407460    6415 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-085453" does not appear in /Users/jenkins/minikube-integration/15310-2115/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-085453" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)
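(Editor's note: the status check exits 6 because the kubeconfig no longer has an entry for the profile — see the "kubeconfig endpoint: extract IP" error above — which is also why minikube prints the stale-context warning. A hypothetical manual repair, following the hint in the output rather than anything the test itself runs:)

	# List contexts known to kubectl; the profile name should appear here
	kubectl config get-contexts

	# Rewrite the kubeconfig entry for the profile, as the warning suggests
	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-085453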

                                                
                                    
TestMultiNode/serial/RestartMultiNode (183.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-090641 --wait=true -v=8 --alsologtostderr --driver=docker 
E1107 09:13:03.133676    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 09:13:30.866315    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 09:14:53.923117    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
multinode_test.go:352: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-090641 --wait=true -v=8 --alsologtostderr --driver=docker : exit status 80 (2m58.895560405s)

                                                
                                                
-- stdout --
	* [multinode-090641] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-090641 in cluster multinode-090641
	* Pulling base image ...
	* Restarting existing docker container for "multinode-090641" ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Starting worker node multinode-090641-m02 in cluster multinode-090641
	* Pulling base image ...
	* Restarting existing docker container for "multinode-090641-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.58.2
	* Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	  - env NO_PROXY=192.168.58.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 09:12:02.068696    9678 out.go:296] Setting OutFile to fd 1 ...
	I1107 09:12:02.068876    9678 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:12:02.068882    9678 out.go:309] Setting ErrFile to fd 2...
	I1107 09:12:02.068886    9678 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:12:02.068997    9678 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	I1107 09:12:02.069491    9678 out.go:303] Setting JSON to false
	I1107 09:12:02.088055    9678 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":2497,"bootTime":1667838625,"procs":396,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1107 09:12:02.088163    9678 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 09:12:02.110109    9678 out.go:177] * [multinode-090641] minikube v1.28.0 on Darwin 13.0
	I1107 09:12:02.152978    9678 notify.go:220] Checking for updates...
	I1107 09:12:02.174669    9678 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 09:12:02.195649    9678 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:12:02.217060    9678 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 09:12:02.238693    9678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 09:12:02.259951    9678 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	I1107 09:12:02.282515    9678 config.go:180] Loaded profile config "multinode-090641": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:12:02.283059    9678 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 09:12:02.343915    9678 docker.go:137] docker version: linux-20.10.20
	I1107 09:12:02.344059    9678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 09:12:02.486010    9678 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-07 17:12:02.400066327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 09:12:02.529501    9678 out.go:177] * Using the docker driver based on existing profile
	I1107 09:12:02.550766    9678 start.go:282] selected driver: docker
	I1107 09:12:02.550802    9678 start.go:808] validating driver "docker" against &{Name:multinode-090641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-090641 Namespace:default APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securit
y-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:12:02.551015    9678 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 09:12:02.551277    9678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 09:12:02.692763    9678 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-07 17:12:02.608782854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 09:12:02.695167    9678 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 09:12:02.695194    9678 cni.go:95] Creating CNI manager for ""
	I1107 09:12:02.695203    9678 cni.go:156] 2 nodes found, recommending kindnet
	I1107 09:12:02.695218    9678 start_flags.go:317] config:
	{Name:multinode-090641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-090641 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkP
lugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-cr
eds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:12:02.738804    9678 out.go:177] * Starting control plane node multinode-090641 in cluster multinode-090641
	I1107 09:12:02.760104    9678 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 09:12:02.782012    9678 out.go:177] * Pulling base image ...
	I1107 09:12:02.803932    9678 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 09:12:02.803967    9678 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 09:12:02.804029    9678 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 09:12:02.804048    9678 cache.go:57] Caching tarball of preloaded images
	I1107 09:12:02.804279    9678 preload.go:174] Found /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 09:12:02.804301    9678 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 09:12:02.805280    9678 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/config.json ...
	I1107 09:12:02.859827    9678 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 09:12:02.859842    9678 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 09:12:02.859851    9678 cache.go:208] Successfully downloaded all kic artifacts
	I1107 09:12:02.859890    9678 start.go:364] acquiring machines lock for multinode-090641: {Name:mk3bc128ea070c03d4d369f5843a2d85d99f9678 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 09:12:02.859978    9678 start.go:368] acquired machines lock for "multinode-090641" in 68.063µs
	I1107 09:12:02.860001    9678 start.go:96] Skipping create...Using existing machine configuration
	I1107 09:12:02.860013    9678 fix.go:55] fixHost starting: 
	I1107 09:12:02.860255    9678 cli_runner.go:164] Run: docker container inspect multinode-090641 --format={{.State.Status}}
	I1107 09:12:02.915360    9678 fix.go:103] recreateIfNeeded on multinode-090641: state=Stopped err=<nil>
	W1107 09:12:02.915388    9678 fix.go:129] unexpected machine state, will restart: <nil>
	I1107 09:12:02.937321    9678 out.go:177] * Restarting existing docker container for "multinode-090641" ...
	I1107 09:12:02.959280    9678 cli_runner.go:164] Run: docker start multinode-090641
	I1107 09:12:03.292828    9678 cli_runner.go:164] Run: docker container inspect multinode-090641 --format={{.State.Status}}
	I1107 09:12:03.350674    9678 kic.go:415] container "multinode-090641" state is running.
	I1107 09:12:03.351284    9678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-090641
	I1107 09:12:03.410873    9678 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/config.json ...
	I1107 09:12:03.411273    9678 machine.go:88] provisioning docker machine ...
	I1107 09:12:03.411294    9678 ubuntu.go:169] provisioning hostname "multinode-090641"
	I1107 09:12:03.411395    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:03.472430    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:03.472649    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51425 <nil> <nil>}
	I1107 09:12:03.472666    9678 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-090641 && echo "multinode-090641" | sudo tee /etc/hostname
	I1107 09:12:03.599972    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-090641
	
	I1107 09:12:03.600096    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:03.662785    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:03.662944    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51425 <nil> <nil>}
	I1107 09:12:03.662957    9678 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-090641' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-090641/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-090641' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 09:12:03.777945    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 09:12:03.777975    9678 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15310-2115/.minikube CaCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15310-2115/.minikube}
	I1107 09:12:03.777994    9678 ubuntu.go:177] setting up certificates
	I1107 09:12:03.778002    9678 provision.go:83] configureAuth start
	I1107 09:12:03.778098    9678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-090641
	I1107 09:12:03.835025    9678 provision.go:138] copyHostCerts
	I1107 09:12:03.835072    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 09:12:03.835144    9678 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem, removing ...
	I1107 09:12:03.835153    9678 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 09:12:03.835259    9678 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem (1082 bytes)
	I1107 09:12:03.835957    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 09:12:03.836060    9678 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem, removing ...
	I1107 09:12:03.836069    9678 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 09:12:03.836176    9678 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem (1123 bytes)
	I1107 09:12:03.836457    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 09:12:03.836723    9678 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem, removing ...
	I1107 09:12:03.836730    9678 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 09:12:03.836807    9678 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem (1679 bytes)
	I1107 09:12:03.836960    9678 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem org=jenkins.multinode-090641 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-090641]
	I1107 09:12:04.134049    9678 provision.go:172] copyRemoteCerts
	I1107 09:12:04.134121    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 09:12:04.134190    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:04.193882    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:04.278492    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 09:12:04.278582    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 09:12:04.297166    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 09:12:04.297253    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1107 09:12:04.314709    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 09:12:04.314800    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 09:12:04.332187    9678 provision.go:86] duration metric: configureAuth took 554.156405ms
	I1107 09:12:04.332201    9678 ubuntu.go:193] setting minikube options for container-runtime
	I1107 09:12:04.332387    9678 config.go:180] Loaded profile config "multinode-090641": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:12:04.332468    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:04.388965    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:04.389109    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51425 <nil> <nil>}
	I1107 09:12:04.389119    9678 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 09:12:04.506200    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 09:12:04.506213    9678 ubuntu.go:71] root file system type: overlay
	I1107 09:12:04.506360    9678 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 09:12:04.506461    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:04.562461    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:04.562611    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51425 <nil> <nil>}
	I1107 09:12:04.562664    9678 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 09:12:04.690928    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 09:12:04.691027    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:04.749104    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:04.749274    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51425 <nil> <nil>}
	I1107 09:12:04.749290    9678 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 09:12:04.875856    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 09:12:04.875875    9678 machine.go:91] provisioned docker machine in 1.464554493s
	I1107 09:12:04.875885    9678 start.go:300] post-start starting for "multinode-090641" (driver="docker")
	I1107 09:12:04.875891    9678 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 09:12:04.875966    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 09:12:04.876027    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:04.933346    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:05.019881    9678 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 09:12:05.023180    9678 command_runner.go:130] > NAME="Ubuntu"
	I1107 09:12:05.023189    9678 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I1107 09:12:05.023193    9678 command_runner.go:130] > ID=ubuntu
	I1107 09:12:05.023205    9678 command_runner.go:130] > ID_LIKE=debian
	I1107 09:12:05.023211    9678 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I1107 09:12:05.023214    9678 command_runner.go:130] > VERSION_ID="20.04"
	I1107 09:12:05.023218    9678 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1107 09:12:05.023222    9678 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1107 09:12:05.023226    9678 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1107 09:12:05.023232    9678 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1107 09:12:05.023236    9678 command_runner.go:130] > VERSION_CODENAME=focal
	I1107 09:12:05.023242    9678 command_runner.go:130] > UBUNTU_CODENAME=focal
	I1107 09:12:05.023285    9678 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 09:12:05.023296    9678 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 09:12:05.023306    9678 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 09:12:05.023311    9678 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 09:12:05.023318    9678 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/addons for local assets ...
	I1107 09:12:05.023415    9678 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/files for local assets ...
	I1107 09:12:05.023619    9678 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> 32672.pem in /etc/ssl/certs
	I1107 09:12:05.023625    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> /etc/ssl/certs/32672.pem
	I1107 09:12:05.023830    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 09:12:05.030786    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:12:05.048235    9678 start.go:303] post-start completed in 172.336296ms
	I1107 09:12:05.048314    9678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 09:12:05.048377    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:05.104451    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:05.189375    9678 command_runner.go:130] > 6%
	I1107 09:12:05.189461    9678 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 09:12:05.193616    9678 command_runner.go:130] > 92G
	I1107 09:12:05.193943    9678 fix.go:57] fixHost completed within 2.333872366s
	I1107 09:12:05.193955    9678 start.go:83] releasing machines lock for "multinode-090641", held for 2.333910993s
	I1107 09:12:05.194046    9678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-090641
	I1107 09:12:05.249832    9678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 09:12:05.249834    9678 ssh_runner.go:195] Run: systemctl --version
	I1107 09:12:05.249917    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:05.249916    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:05.308581    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:05.308811    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:05.392373    9678 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.18)
	I1107 09:12:05.392395    9678 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I1107 09:12:05.448316    9678 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1107 09:12:05.450329    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1107 09:12:05.457569    9678 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I1107 09:12:05.469721    9678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:12:05.542255    9678 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1107 09:12:05.622400    9678 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 09:12:05.631535    9678 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1107 09:12:05.631641    9678 command_runner.go:130] > [Unit]
	I1107 09:12:05.631649    9678 command_runner.go:130] > Description=Docker Application Container Engine
	I1107 09:12:05.631653    9678 command_runner.go:130] > Documentation=https://docs.docker.com
	I1107 09:12:05.631657    9678 command_runner.go:130] > BindsTo=containerd.service
	I1107 09:12:05.631662    9678 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1107 09:12:05.631666    9678 command_runner.go:130] > Wants=network-online.target
	I1107 09:12:05.631673    9678 command_runner.go:130] > Requires=docker.socket
	I1107 09:12:05.631676    9678 command_runner.go:130] > StartLimitBurst=3
	I1107 09:12:05.631680    9678 command_runner.go:130] > StartLimitIntervalSec=60
	I1107 09:12:05.631703    9678 command_runner.go:130] > [Service]
	I1107 09:12:05.631710    9678 command_runner.go:130] > Type=notify
	I1107 09:12:05.631715    9678 command_runner.go:130] > Restart=on-failure
	I1107 09:12:05.631721    9678 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1107 09:12:05.631734    9678 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1107 09:12:05.631741    9678 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1107 09:12:05.631746    9678 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1107 09:12:05.631752    9678 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1107 09:12:05.631758    9678 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1107 09:12:05.631763    9678 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1107 09:12:05.631773    9678 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1107 09:12:05.631781    9678 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1107 09:12:05.631784    9678 command_runner.go:130] > ExecStart=
	I1107 09:12:05.631796    9678 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1107 09:12:05.631802    9678 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1107 09:12:05.631815    9678 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1107 09:12:05.631820    9678 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1107 09:12:05.631828    9678 command_runner.go:130] > LimitNOFILE=infinity
	I1107 09:12:05.631835    9678 command_runner.go:130] > LimitNPROC=infinity
	I1107 09:12:05.631839    9678 command_runner.go:130] > LimitCORE=infinity
	I1107 09:12:05.631844    9678 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1107 09:12:05.631849    9678 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1107 09:12:05.631852    9678 command_runner.go:130] > TasksMax=infinity
	I1107 09:12:05.631855    9678 command_runner.go:130] > TimeoutStartSec=0
	I1107 09:12:05.631861    9678 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1107 09:12:05.631866    9678 command_runner.go:130] > Delegate=yes
	I1107 09:12:05.631872    9678 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1107 09:12:05.631876    9678 command_runner.go:130] > KillMode=process
	I1107 09:12:05.631882    9678 command_runner.go:130] > [Install]
	I1107 09:12:05.631887    9678 command_runner.go:130] > WantedBy=multi-user.target
	I1107 09:12:05.632535    9678 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 09:12:05.632605    9678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 09:12:05.642301    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 09:12:05.654247    9678 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1107 09:12:05.654264    9678 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I1107 09:12:05.655354    9678 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 09:12:05.725219    9678 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 09:12:05.790467    9678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:12:05.848257    9678 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 09:12:06.110835    9678 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 09:12:06.180071    9678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:12:06.244694    9678 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1107 09:12:06.255614    9678 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 09:12:06.255698    9678 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 09:12:06.259469    9678 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1107 09:12:06.259485    9678 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1107 09:12:06.259494    9678 command_runner.go:130] > Device: 97h/151d	Inode: 115         Links: 1
	I1107 09:12:06.259500    9678 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1107 09:12:06.259506    9678 command_runner.go:130] > Access: 2022-11-07 17:12:05.558539143 +0000
	I1107 09:12:06.259510    9678 command_runner.go:130] > Modify: 2022-11-07 17:12:05.558539143 +0000
	I1107 09:12:06.259517    9678 command_runner.go:130] > Change: 2022-11-07 17:12:05.559539143 +0000
	I1107 09:12:06.259522    9678 command_runner.go:130] >  Birth: -
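The step above waits up to 60s for /var/run/cri-dockerd.sock to show up before probing it with stat. A minimal standalone sketch of that kind of poll-until-exists loop (the path and timeout are taken from the log; the helper name is made up and this is not minikube's actual start.go code):

package main

import (
    "fmt"
    "os"
    "time"
)

// waitForSocket polls os.Stat until the path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        if _, err := os.Stat(path); err == nil {
            return nil // the socket file is present
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
    if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("socket is ready")
}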
	I1107 09:12:06.259726    9678 start.go:472] Will wait 60s for crictl version
	I1107 09:12:06.259784    9678 ssh_runner.go:195] Run: sudo crictl version
	I1107 09:12:06.286394    9678 command_runner.go:130] > Version:  0.1.0
	I1107 09:12:06.286404    9678 command_runner.go:130] > RuntimeName:  docker
	I1107 09:12:06.286408    9678 command_runner.go:130] > RuntimeVersion:  20.10.20
	I1107 09:12:06.286422    9678 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I1107 09:12:06.288498    9678 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1107 09:12:06.288594    9678 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:12:06.314661    9678 command_runner.go:130] > 20.10.20
	I1107 09:12:06.317146    9678 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:12:06.342943    9678 command_runner.go:130] > 20.10.20
	I1107 09:12:06.391784    9678 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1107 09:12:06.392055    9678 cli_runner.go:164] Run: docker exec -t multinode-090641 dig +short host.docker.internal
	I1107 09:12:06.502873    9678 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1107 09:12:06.502993    9678 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1107 09:12:06.507390    9678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 09:12:06.516796    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:06.573067    9678 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 09:12:06.573154    9678 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 09:12:06.594834    9678 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I1107 09:12:06.594846    9678 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I1107 09:12:06.594851    9678 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I1107 09:12:06.594856    9678 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I1107 09:12:06.594861    9678 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I1107 09:12:06.594864    9678 command_runner.go:130] > registry.k8s.io/pause:3.8
	I1107 09:12:06.594869    9678 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I1107 09:12:06.594875    9678 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I1107 09:12:06.594880    9678 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I1107 09:12:06.594884    9678 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 09:12:06.594888    9678 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1107 09:12:06.596992    9678 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1107 09:12:06.597006    9678 docker.go:543] Images already preloaded, skipping extraction
	I1107 09:12:06.597096    9678 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 09:12:06.618803    9678 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I1107 09:12:06.618817    9678 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I1107 09:12:06.618824    9678 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I1107 09:12:06.618829    9678 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I1107 09:12:06.618834    9678 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I1107 09:12:06.618841    9678 command_runner.go:130] > registry.k8s.io/pause:3.8
	I1107 09:12:06.618855    9678 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I1107 09:12:06.618863    9678 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I1107 09:12:06.618870    9678 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I1107 09:12:06.618880    9678 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 09:12:06.618889    9678 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1107 09:12:06.621080    9678 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1107 09:12:06.621097    9678 cache_images.go:84] Images are preloaded, skipping loading
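The two identical docker images --format {{.Repository}}:{{.Tag}} listings are how the preload check decides there is nothing left to extract or pull. A rough standalone illustration of that comparison (hypothetical helper logic, not the cache_images.go implementation; the expected list is a subset copied from the log):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // Images the cluster is expected to have (subset taken from the log above).
    wanted := []string{
        "registry.k8s.io/kube-apiserver:v1.25.3",
        "registry.k8s.io/etcd:3.5.4-0",
        "registry.k8s.io/coredns/coredns:v1.9.3",
    }

    // List what the Docker daemon already has, one repo:tag per line.
    out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    if err != nil {
        panic(err)
    }
    have := map[string]bool{}
    for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
        have[line] = true
    }

    // If every wanted image is present, loading can be skipped.
    var missing []string
    for _, img := range wanted {
        if !have[img] {
            missing = append(missing, img)
        }
    }
    if len(missing) == 0 {
        fmt.Println("images are preloaded, skipping loading")
    } else {
        fmt.Println("need to load:", missing)
    }
}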
	I1107 09:12:06.621189    9678 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 09:12:06.684368    9678 command_runner.go:130] > systemd
	I1107 09:12:06.686787    9678 cni.go:95] Creating CNI manager for ""
	I1107 09:12:06.686800    9678 cni.go:156] 2 nodes found, recommending kindnet
	I1107 09:12:06.686816    9678 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 09:12:06.686837    9678 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-090641 NodeName:multinode-090641 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 09:12:06.686976    9678 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-090641"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
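The kubeadm config above is generated from the options struct logged at kubeadm.go:156. A toy text/template sketch of how a fragment such as the InitConfiguration could be rendered from a struct like that (the struct fields and template here are invented for illustration, not minikube's actual template; values are copied from the log):

package main

import (
    "os"
    "text/template"
)

// Options mirrors a few of the kubeadm option fields seen in the log.
type Options struct {
    AdvertiseAddress string
    APIServerPort    int
    NodeName         string
    CRISocket        string
    NodeIP           string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
    opts := Options{
        AdvertiseAddress: "192.168.58.2",
        APIServerPort:    8443,
        NodeName:         "multinode-090641",
        CRISocket:        "/var/run/cri-dockerd.sock",
        NodeIP:           "192.168.58.2",
    }
    // Render the fragment to stdout; a real caller would write the full config
    // to /var/tmp/minikube/kubeadm.yaml.new, as the log shows.
    t := template.Must(template.New("init").Parse(initConfigTmpl))
    if err := t.Execute(os.Stdout, opts); err != nil {
        panic(err)
    }
}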
	I1107 09:12:06.687072    9678 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-090641 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-090641 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
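The kubelet [Service] drop-in above rebuilds the ExecStart line from per-node settings (CRI socket, hostname override, node IP). A simplified sketch of assembling and sorting those flags so the generated unit is deterministic (flag values are copied from the log; the function is illustrative, not kubeadm.go:962):

package main

import (
    "fmt"
    "sort"
    "strings"
)

// kubeletFlags builds the extra arguments appended to the kubelet binary.
func kubeletFlags(nodeName, nodeIP, criSocket string) string {
    args := map[string]string{
        "bootstrap-kubeconfig":       "/etc/kubernetes/bootstrap-kubelet.conf",
        "config":                     "/var/lib/kubelet/config.yaml",
        "container-runtime":          "remote",
        "container-runtime-endpoint": criSocket,
        "hostname-override":          nodeName,
        "image-service-endpoint":     criSocket,
        "kubeconfig":                 "/etc/kubernetes/kubelet.conf",
        "node-ip":                    nodeIP,
        "runtime-request-timeout":    "15m",
    }
    keys := make([]string, 0, len(args))
    for k := range args {
        keys = append(keys, k)
    }
    sort.Strings(keys) // stable order, matching the sorted flags in the unit file
    parts := make([]string, 0, len(keys))
    for _, k := range keys {
        parts = append(parts, fmt.Sprintf("--%s=%s", k, args[k]))
    }
    return strings.Join(parts, " ")
}

func main() {
    fmt.Println("ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet " +
        kubeletFlags("multinode-090641", "192.168.58.2", "/var/run/cri-dockerd.sock"))
}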
	I1107 09:12:06.687145    9678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1107 09:12:06.694056    9678 command_runner.go:130] > kubeadm
	I1107 09:12:06.694064    9678 command_runner.go:130] > kubectl
	I1107 09:12:06.694068    9678 command_runner.go:130] > kubelet
	I1107 09:12:06.694703    9678 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 09:12:06.694758    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 09:12:06.701988    9678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (478 bytes)
	I1107 09:12:06.713880    9678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 09:12:06.726107    9678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2038 bytes)
	I1107 09:12:06.738562    9678 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1107 09:12:06.742186    9678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
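The bash one-liner above drops any existing control-plane.minikube.internal entry from /etc/hosts and appends the current one. The same idempotent update expressed as a small Go sketch (it writes to a scratch file rather than /etc/hosts, since the real step needs sudo cp):

package main

import (
    "fmt"
    "os"
    "strings"
)

// upsertHost removes any line ending in "\t<host>" and appends "ip\thost",
// mirroring the grep -v / echo / cp sequence in the log.
func upsertHost(contents, ip, host string) string {
    var kept []string
    for _, line := range strings.Split(contents, "\n") {
        if strings.HasSuffix(line, "\t"+host) {
            continue // drop the stale entry
        }
        if line != "" {
            kept = append(kept, line)
        }
    }
    kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    return strings.Join(kept, "\n") + "\n"
}

func main() {
    old := "127.0.0.1\tlocalhost\n192.168.58.9\tcontrol-plane.minikube.internal\n"
    updated := upsertHost(old, "192.168.58.2", "control-plane.minikube.internal")
    // A real implementation would copy this over /etc/hosts with elevated rights.
    if err := os.WriteFile("hosts.updated", []byte(updated), 0o644); err != nil {
        panic(err)
    }
    fmt.Print(updated)
}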
	I1107 09:12:06.751571    9678 certs.go:54] Setting up /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641 for IP: 192.168.58.2
	I1107 09:12:06.751705    9678 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key
	I1107 09:12:06.751776    9678 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key
	I1107 09:12:06.751866    9678 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.key
	I1107 09:12:06.751942    9678 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/apiserver.key.cee25041
	I1107 09:12:06.752004    9678 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/proxy-client.key
	I1107 09:12:06.752013    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1107 09:12:06.752043    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1107 09:12:06.752071    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1107 09:12:06.752092    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1107 09:12:06.752113    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 09:12:06.752133    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 09:12:06.752153    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 09:12:06.752177    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 09:12:06.752283    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem (1338 bytes)
	W1107 09:12:06.752326    9678 certs.go:384] ignoring /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267_empty.pem, impossibly tiny 0 bytes
	I1107 09:12:06.752338    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 09:12:06.752383    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem (1082 bytes)
	I1107 09:12:06.752421    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem (1123 bytes)
	I1107 09:12:06.752456    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem (1679 bytes)
	I1107 09:12:06.752533    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:12:06.752567    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem -> /usr/share/ca-certificates/3267.pem
	I1107 09:12:06.752590    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> /usr/share/ca-certificates/32672.pem
	I1107 09:12:06.752611    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:06.753087    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 09:12:06.770020    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 09:12:06.787192    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 09:12:06.805067    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 09:12:06.821736    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 09:12:06.839013    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 09:12:06.855916    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 09:12:06.874527    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 09:12:06.891576    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem --> /usr/share/ca-certificates/3267.pem (1338 bytes)
	I1107 09:12:06.908404    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /usr/share/ca-certificates/32672.pem (1708 bytes)
	I1107 09:12:06.925327    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 09:12:06.941704    9678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 09:12:06.954526    9678 ssh_runner.go:195] Run: openssl version
	I1107 09:12:06.959310    9678 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I1107 09:12:06.959668    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 09:12:06.967759    9678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:06.971487    9678 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:06.971596    9678 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:06.971651    9678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:06.976611    9678 command_runner.go:130] > b5213941
	I1107 09:12:06.976951    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 09:12:06.984041    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3267.pem && ln -fs /usr/share/ca-certificates/3267.pem /etc/ssl/certs/3267.pem"
	I1107 09:12:06.992152    9678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3267.pem
	I1107 09:12:06.996004    9678 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/3267.pem
	I1107 09:12:06.996148    9678 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/3267.pem
	I1107 09:12:06.996194    9678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3267.pem
	I1107 09:12:07.001104    9678 command_runner.go:130] > 51391683
	I1107 09:12:07.001463    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3267.pem /etc/ssl/certs/51391683.0"
	I1107 09:12:07.008703    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32672.pem && ln -fs /usr/share/ca-certificates/32672.pem /etc/ssl/certs/32672.pem"
	I1107 09:12:07.016423    9678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32672.pem
	I1107 09:12:07.019974    9678 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/32672.pem
	I1107 09:12:07.020000    9678 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/32672.pem
	I1107 09:12:07.020064    9678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32672.pem
	I1107 09:12:07.024711    9678 command_runner.go:130] > 3ec20f2e
	I1107 09:12:07.024989    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32672.pem /etc/ssl/certs/3ec20f2e.0"
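Each CA bundle copied above is hashed with openssl x509 -hash -noout -in <pem> and then symlinked as /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find it. A standalone sketch of that step (paths are examples from the log; it must run as root, and it shells out to openssl rather than reimplementing the subject hash):

package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

// linkCACert computes the OpenSSL subject hash of a PEM file and creates the
// conventional <certsDir>/<hash>.0 symlink pointing at it.
func linkCACert(pemPath, certsDir string) error {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    if err != nil {
        return fmt.Errorf("hashing %s: %w", pemPath, err)
    }
    hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    link := filepath.Join(certsDir, hash+".0")
    // Remove a stale link first so os.Symlink does not fail with EEXIST.
    _ = os.Remove(link)
    return os.Symlink(pemPath, link)
}

func main() {
    if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("CA certificate linked")
}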
	I1107 09:12:07.031899    9678 kubeadm.go:396] StartCluster: {Name:multinode-090641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-090641 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:12:07.032049    9678 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 09:12:07.055157    9678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 09:12:07.061822    9678 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1107 09:12:07.061831    9678 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1107 09:12:07.061836    9678 command_runner.go:130] > /var/lib/minikube/etcd:
	I1107 09:12:07.061840    9678 command_runner.go:130] > member
	I1107 09:12:07.062362    9678 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1107 09:12:07.062374    9678 kubeadm.go:627] restartCluster start
	I1107 09:12:07.062427    9678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1107 09:12:07.069418    9678 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:07.091194    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:07.150593    9678 kubeconfig.go:135] verify returned: extract IP: "multinode-090641" does not appear in /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:12:07.150681    9678 kubeconfig.go:146] "multinode-090641" context is missing from /Users/jenkins/minikube-integration/15310-2115/kubeconfig - will repair!
	I1107 09:12:07.150938    9678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/kubeconfig: {Name:mk892d56d979702eee7d784abc692970bda7bca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:12:07.151424    9678 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:12:07.151618    9678 kapi.go:59] client config for multinode-090641: &rest.Config{Host:"https://127.0.0.1:51429", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.key", CAFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2345ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 09:12:07.151960    9678 cert_rotation.go:137] Starting client certificate rotation controller
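The kapi.go:59 line above shows the rest.Config built from the profile's client certificate, key, and CA. A minimal client-go sketch with the same TLS fields (host port and file paths come from the log; this assumes a recent k8s.io/client-go and is only an illustration, not minikube's kapi package):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    profile := "/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641"
    cfg := &rest.Config{
        Host: "https://127.0.0.1:51429",
        TLSClientConfig: rest.TLSClientConfig{
            CertFile: profile + "/client.crt",
            KeyFile:  profile + "/client.key",
            CAFile:   "/Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt",
        },
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Any authenticated call works as a smoke test; listing nodes is typical.
    nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("nodes:", len(nodes.Items))
}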
	I1107 09:12:07.152131    9678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1107 09:12:07.159928    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:07.159989    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:07.168070    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:07.370166    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:07.370344    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:07.381178    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:07.568345    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:07.568492    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:07.578934    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:07.770214    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:07.770360    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:07.781200    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:07.970207    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:07.970383    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:07.980898    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:08.170217    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:08.170466    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:08.181215    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:08.370258    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:08.370515    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:08.380803    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:08.568797    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:08.568910    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:08.580265    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:08.770278    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:08.770492    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:08.781323    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:08.970258    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:08.970430    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:08.981927    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:09.170258    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:09.170472    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:09.181873    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:09.370239    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:09.370434    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:09.381523    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:09.570271    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:09.570448    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:09.581290    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:09.768680    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:09.768909    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:09.779498    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:09.970354    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:09.970510    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:09.980796    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:10.170305    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:10.170516    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:10.181682    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:10.181694    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:10.181752    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:10.190178    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:10.190190    9678 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
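Those repeated pgrep failures at roughly 200ms intervals are the apiserver status check giving up and concluding a reconfigure is needed. A generic poll-until-timeout helper in the same spirit (the command and cadence come from the log; the helper itself is hypothetical):

package main

import (
    "fmt"
    "os/exec"
    "time"
)

// pollCommand re-runs cmd until it exits 0 or the timeout elapses.
func pollCommand(name string, args []string, interval, timeout time.Duration) bool {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        if err := exec.Command(name, args...).Run(); err == nil {
            return true // command succeeded, e.g. the apiserver pid was found
        }
        time.Sleep(interval)
    }
    return false
}

func main() {
    ok := pollCommand("pgrep", []string{"-xnf", "kube-apiserver.*minikube.*"},
        200*time.Millisecond, 3*time.Second)
    if !ok {
        fmt.Println("needs reconfigure: timed out waiting for the apiserver process")
        return
    }
    fmt.Println("apiserver process is running")
}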
	I1107 09:12:10.190198    9678 kubeadm.go:1114] stopping kube-system containers ...
	I1107 09:12:10.190281    9678 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 09:12:10.214062    9678 command_runner.go:130] > 3bf8f38c0aaf
	I1107 09:12:10.214072    9678 command_runner.go:130] > f8d0de33debd
	I1107 09:12:10.214083    9678 command_runner.go:130] > 627de0fec15d
	I1107 09:12:10.214087    9678 command_runner.go:130] > bea8197d27a3
	I1107 09:12:10.214093    9678 command_runner.go:130] > fcff0a2c79bd
	I1107 09:12:10.214100    9678 command_runner.go:130] > 99867f7318f3
	I1107 09:12:10.214105    9678 command_runner.go:130] > e521a3a86451
	I1107 09:12:10.214109    9678 command_runner.go:130] > 2141f6b1f9b0
	I1107 09:12:10.214118    9678 command_runner.go:130] > 5d163fc295ef
	I1107 09:12:10.214122    9678 command_runner.go:130] > 6532ace61a77
	I1107 09:12:10.214126    9678 command_runner.go:130] > 309af6aa7d07
	I1107 09:12:10.214129    9678 command_runner.go:130] > 99e10e3b23a1
	I1107 09:12:10.214133    9678 command_runner.go:130] > 1244b6d56687
	I1107 09:12:10.214147    9678 command_runner.go:130] > 35468c6f4808
	I1107 09:12:10.214153    9678 command_runner.go:130] > 44f3aabcd4fb
	I1107 09:12:10.214157    9678 command_runner.go:130] > 8c41e71be632
	I1107 09:12:10.214165    9678 command_runner.go:130] > f786068f3c1d
	I1107 09:12:10.214170    9678 command_runner.go:130] > 0c7686c51f0a
	I1107 09:12:10.214174    9678 command_runner.go:130] > 23e28a639e24
	I1107 09:12:10.214179    9678 command_runner.go:130] > 24731ab856d5
	I1107 09:12:10.214182    9678 command_runner.go:130] > 08c54785a74e
	I1107 09:12:10.214187    9678 command_runner.go:130] > bce4e7cd7c5d
	I1107 09:12:10.214190    9678 command_runner.go:130] > 729a721b15ce
	I1107 09:12:10.214194    9678 command_runner.go:130] > a9009c3f6cd2
	I1107 09:12:10.214210    9678 command_runner.go:130] > 30cc23b24e38
	I1107 09:12:10.214220    9678 command_runner.go:130] > 24b8c9ce80d6
	I1107 09:12:10.214225    9678 command_runner.go:130] > e67b00fac7f3
	I1107 09:12:10.214230    9678 command_runner.go:130] > 0301a36d3c5b
	I1107 09:12:10.214234    9678 command_runner.go:130] > 1d0a21243dfd
	I1107 09:12:10.214237    9678 command_runner.go:130] > c2650598bf53
	I1107 09:12:10.214241    9678 command_runner.go:130] > ca7fb2d58b7c
	I1107 09:12:10.214244    9678 command_runner.go:130] > a63fcdcf8012
	I1107 09:12:10.216363    9678 docker.go:444] Stopping containers: [3bf8f38c0aaf f8d0de33debd 627de0fec15d bea8197d27a3 fcff0a2c79bd 99867f7318f3 e521a3a86451 2141f6b1f9b0 5d163fc295ef 6532ace61a77 309af6aa7d07 99e10e3b23a1 1244b6d56687 35468c6f4808 44f3aabcd4fb 8c41e71be632 f786068f3c1d 0c7686c51f0a 23e28a639e24 24731ab856d5 08c54785a74e bce4e7cd7c5d 729a721b15ce a9009c3f6cd2 30cc23b24e38 24b8c9ce80d6 e67b00fac7f3 0301a36d3c5b 1d0a21243dfd c2650598bf53 ca7fb2d58b7c a63fcdcf8012]
	I1107 09:12:10.216463    9678 ssh_runner.go:195] Run: docker stop 3bf8f38c0aaf f8d0de33debd 627de0fec15d bea8197d27a3 fcff0a2c79bd 99867f7318f3 e521a3a86451 2141f6b1f9b0 5d163fc295ef 6532ace61a77 309af6aa7d07 99e10e3b23a1 1244b6d56687 35468c6f4808 44f3aabcd4fb 8c41e71be632 f786068f3c1d 0c7686c51f0a 23e28a639e24 24731ab856d5 08c54785a74e bce4e7cd7c5d 729a721b15ce a9009c3f6cd2 30cc23b24e38 24b8c9ce80d6 e67b00fac7f3 0301a36d3c5b 1d0a21243dfd c2650598bf53 ca7fb2d58b7c a63fcdcf8012
	I1107 09:12:10.237555    9678 command_runner.go:130] > 3bf8f38c0aaf
	I1107 09:12:10.237586    9678 command_runner.go:130] > f8d0de33debd
	I1107 09:12:10.237595    9678 command_runner.go:130] > 627de0fec15d
	I1107 09:12:10.238078    9678 command_runner.go:130] > bea8197d27a3
	I1107 09:12:10.238088    9678 command_runner.go:130] > fcff0a2c79bd
	I1107 09:12:10.238091    9678 command_runner.go:130] > 99867f7318f3
	I1107 09:12:10.238095    9678 command_runner.go:130] > e521a3a86451
	I1107 09:12:10.238103    9678 command_runner.go:130] > 2141f6b1f9b0
	I1107 09:12:10.238109    9678 command_runner.go:130] > 5d163fc295ef
	I1107 09:12:10.238119    9678 command_runner.go:130] > 6532ace61a77
	I1107 09:12:10.238123    9678 command_runner.go:130] > 309af6aa7d07
	I1107 09:12:10.238454    9678 command_runner.go:130] > 99e10e3b23a1
	I1107 09:12:10.238464    9678 command_runner.go:130] > 1244b6d56687
	I1107 09:12:10.238470    9678 command_runner.go:130] > 35468c6f4808
	I1107 09:12:10.238476    9678 command_runner.go:130] > 44f3aabcd4fb
	I1107 09:12:10.238824    9678 command_runner.go:130] > 8c41e71be632
	I1107 09:12:10.238840    9678 command_runner.go:130] > f786068f3c1d
	I1107 09:12:10.238860    9678 command_runner.go:130] > 0c7686c51f0a
	I1107 09:12:10.238868    9678 command_runner.go:130] > 23e28a639e24
	I1107 09:12:10.238878    9678 command_runner.go:130] > 24731ab856d5
	I1107 09:12:10.238889    9678 command_runner.go:130] > 08c54785a74e
	I1107 09:12:10.238895    9678 command_runner.go:130] > bce4e7cd7c5d
	I1107 09:12:10.238901    9678 command_runner.go:130] > 729a721b15ce
	I1107 09:12:10.238906    9678 command_runner.go:130] > a9009c3f6cd2
	I1107 09:12:10.239062    9678 command_runner.go:130] > 30cc23b24e38
	I1107 09:12:10.239072    9678 command_runner.go:130] > 24b8c9ce80d6
	I1107 09:12:10.239077    9678 command_runner.go:130] > e67b00fac7f3
	I1107 09:12:10.239083    9678 command_runner.go:130] > 0301a36d3c5b
	I1107 09:12:10.239088    9678 command_runner.go:130] > 1d0a21243dfd
	I1107 09:12:10.239098    9678 command_runner.go:130] > c2650598bf53
	I1107 09:12:10.239104    9678 command_runner.go:130] > ca7fb2d58b7c
	I1107 09:12:10.239109    9678 command_runner.go:130] > a63fcdcf8012
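Stopping the kube-system containers is done by listing IDs with a name filter and passing them all to a single docker stop, exactly as the two Run: lines above show. A small sketch of that sequence (the filter string is copied from the log):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // Collect the IDs of every container whose name matches the kube-system pattern.
    out, err := exec.Command("docker", "ps", "-a",
        "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
    if err != nil {
        panic(err)
    }
    ids := strings.Fields(string(out))
    if len(ids) == 0 {
        fmt.Println("no kube-system containers to stop")
        return
    }
    // Stop them all with one docker stop invocation.
    args := append([]string{"stop"}, ids...)
    if err := exec.Command("docker", args...).Run(); err != nil {
        panic(err)
    }
    fmt.Printf("stopped %d containers\n", len(ids))
}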
	I1107 09:12:10.241646    9678 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1107 09:12:10.251660    9678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 09:12:10.258161    9678 command_runner.go:130] > -rw------- 1 root root 5639 Nov  7 17:06 /etc/kubernetes/admin.conf
	I1107 09:12:10.258172    9678 command_runner.go:130] > -rw------- 1 root root 5652 Nov  7 17:10 /etc/kubernetes/controller-manager.conf
	I1107 09:12:10.258177    9678 command_runner.go:130] > -rw------- 1 root root 2003 Nov  7 17:07 /etc/kubernetes/kubelet.conf
	I1107 09:12:10.258183    9678 command_runner.go:130] > -rw------- 1 root root 5600 Nov  7 17:10 /etc/kubernetes/scheduler.conf
	I1107 09:12:10.258979    9678 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Nov  7 17:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Nov  7 17:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2003 Nov  7 17:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Nov  7 17:10 /etc/kubernetes/scheduler.conf
	
	I1107 09:12:10.259034    9678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1107 09:12:10.266039    9678 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I1107 09:12:10.266712    9678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1107 09:12:10.273301    9678 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I1107 09:12:10.273964    9678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1107 09:12:10.280757    9678 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:10.280827    9678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1107 09:12:10.287570    9678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1107 09:12:10.294481    9678 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:10.294540    9678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1107 09:12:10.301221    9678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 09:12:10.308491    9678 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
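The grep/rm pairs above keep only the kubeconfig-style files whose server line already points at https://control-plane.minikube.internal:8443; the others are deleted so the kubeadm init phases regenerate them. A sketch of that check (file paths and the expected URL are from the log):

package main

import (
    "fmt"
    "os"
    "strings"
)

// hasServer reports whether the file contains the expected "server:" endpoint.
func hasServer(path, endpoint string) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    return strings.Contains(string(data), "server: "+endpoint), nil
}

func main() {
    endpoint := "https://control-plane.minikube.internal:8443"
    for _, conf := range []string{
        "/etc/kubernetes/controller-manager.conf",
        "/etc/kubernetes/scheduler.conf",
    } {
        ok, err := hasServer(conf, endpoint)
        if err != nil || !ok {
            // Stale or unreadable config: remove it so `kubeadm init phase kubeconfig`
            // writes a fresh one, as seen later in the log.
            _ = os.Remove(conf)
            fmt.Println("removed", conf)
            continue
        }
        fmt.Println("keeping", conf)
    }
}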
	I1107 09:12:10.308502    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:12:10.349862    9678 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 09:12:10.350022    9678 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1107 09:12:10.350451    9678 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1107 09:12:10.350783    9678 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1107 09:12:10.351266    9678 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1107 09:12:10.351603    9678 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1107 09:12:10.351876    9678 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1107 09:12:10.352283    9678 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1107 09:12:10.352788    9678 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1107 09:12:10.353111    9678 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1107 09:12:10.353371    9678 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1107 09:12:10.353743    9678 command_runner.go:130] > [certs] Using the existing "sa" key
	I1107 09:12:10.356916    9678 command_runner.go:130] ! W1107 17:12:10.357189    1124 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:12:10.356939    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:12:10.397976    9678 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 09:12:10.443099    9678 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I1107 09:12:10.788925    9678 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I1107 09:12:10.882805    9678 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 09:12:11.046975    9678 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 09:12:11.050621    9678 command_runner.go:130] ! W1107 17:12:10.405606    1134 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:12:11.050643    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:12:11.102173    9678 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 09:12:11.103072    9678 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 09:12:11.103081    9678 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1107 09:12:11.176587    9678 command_runner.go:130] ! W1107 17:12:11.100178    1156 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:12:11.176610    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:12:11.220088    9678 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 09:12:11.220108    9678 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 09:12:11.224443    9678 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 09:12:11.225774    9678 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 09:12:11.230257    9678 command_runner.go:130] ! W1107 17:12:11.226990    1191 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:12:11.230279    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:12:11.306409    9678 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 09:12:11.312835    9678 command_runner.go:130] ! W1107 17:12:11.312909    1204 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:12:11.312878    9678 api_server.go:51] waiting for apiserver process to appear ...
	I1107 09:12:11.312965    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:12:11.824988    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:12:12.325318    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:12:12.823100    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:12:13.324050    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:12:13.340140    9678 command_runner.go:130] > 1791
	I1107 09:12:13.340173    9678 api_server.go:71] duration metric: took 2.027253262s to wait for apiserver process to appear ...
	I1107 09:12:13.340183    9678 api_server.go:87] waiting for apiserver healthz status ...
	I1107 09:12:13.340199    9678 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51429/healthz ...
	I1107 09:12:16.540783    9678 api_server.go:278] https://127.0.0.1:51429/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1107 09:12:16.540799    9678 api_server.go:102] status: https://127.0.0.1:51429/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 09:12:17.040907    9678 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51429/healthz ...
	I1107 09:12:17.046838    9678 api_server.go:278] https://127.0.0.1:51429/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 09:12:17.046852    9678 api_server.go:102] status: https://127.0.0.1:51429/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 09:12:17.541105    9678 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51429/healthz ...
	I1107 09:12:17.547168    9678 api_server.go:278] https://127.0.0.1:51429/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 09:12:17.547256    9678 api_server.go:102] status: https://127.0.0.1:51429/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 09:12:18.040932    9678 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51429/healthz ...
	I1107 09:12:18.047006    9678 api_server.go:278] https://127.0.0.1:51429/healthz returned 200:
	ok
	I1107 09:12:18.047071    9678 round_trippers.go:463] GET https://127.0.0.1:51429/version
	I1107 09:12:18.047076    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:18.047084    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:18.047091    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:18.053718    9678 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1107 09:12:18.053731    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:18.053738    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:18.053745    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:18.053751    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:18.053758    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:18.053763    9678 round_trippers.go:580]     Content-Length: 263
	I1107 09:12:18.053768    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:18 GMT
	I1107 09:12:18.053773    9678 round_trippers.go:580]     Audit-Id: de192e3b-6d06-4094-b260-e1922c1fe08c
	I1107 09:12:18.053795    9678 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1107 09:12:18.053845    9678 api_server.go:140] control plane version: v1.25.3
	I1107 09:12:18.053856    9678 api_server.go:130] duration metric: took 4.71354866s to wait for apiserver health ...
	I1107 09:12:18.053861    9678 cni.go:95] Creating CNI manager for ""
	I1107 09:12:18.053866    9678 cni.go:156] 2 nodes found, recommending kindnet
	I1107 09:12:18.078589    9678 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1107 09:12:18.114381    9678 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 09:12:18.119212    9678 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1107 09:12:18.119228    9678 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I1107 09:12:18.119236    9678 command_runner.go:130] > Device: 8fh/143d	Inode: 1185203     Links: 1
	I1107 09:12:18.119245    9678 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 09:12:18.119250    9678 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I1107 09:12:18.119254    9678 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I1107 09:12:18.119258    9678 command_runner.go:130] > Change: 2022-11-07 16:45:45.185426543 +0000
	I1107 09:12:18.119261    9678 command_runner.go:130] >  Birth: -
	I1107 09:12:18.119446    9678 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1107 09:12:18.119455    9678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1107 09:12:18.132480    9678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 09:12:19.009788    9678 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1107 09:12:19.011658    9678 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1107 09:12:19.013763    9678 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1107 09:12:19.023160    9678 command_runner.go:130] > daemonset.apps/kindnet configured
	I1107 09:12:19.089859    9678 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 09:12:19.089964    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods
	I1107 09:12:19.089975    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:19.089988    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:19.090002    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:19.096455    9678 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1107 09:12:19.096500    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:19.096518    9678 round_trippers.go:580]     Audit-Id: 397371af-91b0-4004-bfed-550f6679f948
	I1107 09:12:19.096528    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:19.096537    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:19.096546    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:19.096555    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:19.096566    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:19 GMT
	I1107 09:12:19.097707    9678 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"993"},"items":[{"metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"985","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85260 chars]
	I1107 09:12:19.100742    9678 system_pods.go:59] 12 kube-system pods found
	I1107 09:12:19.100760    9678 system_pods.go:61] "coredns-565d847f94-54csh" [6e280b18-683c-4888-93db-3756e665d1f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 09:12:19.100765    9678 system_pods.go:61] "etcd-multinode-090641" [b5cec8d5-21cb-4a1e-a05a-92b541499e1c] Running
	I1107 09:12:19.100772    9678 system_pods.go:61] "kindnet-5d6kd" [be85ead0-4248-490e-a8fc-2a92f78801f3] Running
	I1107 09:12:19.100775    9678 system_pods.go:61] "kindnet-mgtrp" [e8094b6c-54ad-4f87-aaf3-88dc5155b128] Running
	I1107 09:12:19.100778    9678 system_pods.go:61] "kindnet-nx5lb" [3021a22e-37f1-40d1-9205-1abfb03e58a9] Running
	I1107 09:12:19.100782    9678 system_pods.go:61] "kube-apiserver-multinode-090641" [3ae5af06-6458-4954-a296-a43002732bf4] Running
	I1107 09:12:19.100787    9678 system_pods.go:61] "kube-controller-manager-multinode-090641" [1c2584e6-6b2e-4c67-aea4-7c5568355345] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 09:12:19.100793    9678 system_pods.go:61] "kube-proxy-hxglr" [64e6c03e-e0da-4b75-a1eb-ff55dd0c84ff] Running
	I1107 09:12:19.100797    9678 system_pods.go:61] "kube-proxy-nwck5" [017b9de2-3593-4e50-9493-7d14c0b994ce] Running
	I1107 09:12:19.100800    9678 system_pods.go:61] "kube-proxy-rqnqb" [f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d] Running
	I1107 09:12:19.100804    9678 system_pods.go:61] "kube-scheduler-multinode-090641" [76a48883-135f-49f5-831d-d0182408b2ca] Running
	I1107 09:12:19.100808    9678 system_pods.go:61] "storage-provisioner" [29595449-7701-47e2-af62-0638177bb673] Running
	I1107 09:12:19.100811    9678 system_pods.go:74] duration metric: took 10.937241ms to wait for pod list to return data ...
	I1107 09:12:19.100816    9678 node_conditions.go:102] verifying NodePressure condition ...
	I1107 09:12:19.100862    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes
	I1107 09:12:19.100866    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:19.100873    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:19.100878    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:19.103251    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:19.103263    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:19.103269    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:19.103273    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:19 GMT
	I1107 09:12:19.103278    9678 round_trippers.go:580]     Audit-Id: 8af27a4b-79fa-40c0-b790-a04e27530aa3
	I1107 09:12:19.103283    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:19.103287    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:19.103293    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:19.103379    9678 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"994"},"items":[{"metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10902 chars]
	I1107 09:12:19.103882    9678 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1107 09:12:19.103898    9678 node_conditions.go:123] node cpu capacity is 6
	I1107 09:12:19.103911    9678 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1107 09:12:19.103915    9678 node_conditions.go:123] node cpu capacity is 6
	I1107 09:12:19.103918    9678 node_conditions.go:105] duration metric: took 3.099375ms to run NodePressure ...
	I1107 09:12:19.103932    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:12:19.312337    9678 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1107 09:12:19.392061    9678 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1107 09:12:19.396216    9678 command_runner.go:130] ! W1107 17:12:19.214687    2560 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:12:19.396237    9678 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1107 09:12:19.396291    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1107 09:12:19.396297    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:19.396305    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:19.396314    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:19.400258    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:19.400277    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:19.400285    9678 round_trippers.go:580]     Audit-Id: cd3d56be-acf4-46f4-9dd3-30f04e0291c6
	I1107 09:12:19.400292    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:19.400298    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:19.400303    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:19.400307    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:19.400313    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:19 GMT
	I1107 09:12:19.400543    9678 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"999"},"items":[{"metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"781","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30654 chars]
	I1107 09:12:19.401480    9678 kubeadm.go:778] kubelet initialised
	I1107 09:12:19.401491    9678 kubeadm.go:779] duration metric: took 5.24592ms waiting for restarted kubelet to initialise ...
	I1107 09:12:19.401499    9678 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 09:12:19.401545    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods
	I1107 09:12:19.401552    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:19.401561    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:19.401570    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:19.405641    9678 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 09:12:19.405657    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:19.405666    9678 round_trippers.go:580]     Audit-Id: 6eb9d3d7-d64b-4d3d-aac4-59340e60c1b3
	I1107 09:12:19.405673    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:19.405680    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:19.405688    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:19.405717    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:19.405730    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:19 GMT
	I1107 09:12:19.406801    9678 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"999"},"items":[{"metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"985","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85452 chars]
	I1107 09:12:19.408982    9678 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-54csh" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:19.409025    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:19.409030    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:19.409037    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:19.409045    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:19.411672    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:19.411687    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:19.411695    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:19 GMT
	I1107 09:12:19.411701    9678 round_trippers.go:580]     Audit-Id: d001b521-c6a1-48d1-9ae5-0ec8d5c1f79a
	I1107 09:12:19.411707    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:19.411712    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:19.411717    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:19.411723    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:19.411805    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"985","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6602 chars]
	I1107 09:12:19.412137    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:19.412145    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:19.412151    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:19.412158    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:19.414552    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:19.414572    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:19.414583    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:19.414594    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:19 GMT
	I1107 09:12:19.414605    9678 round_trippers.go:580]     Audit-Id: aa418369-6369-4ab3-87b0-9ac47bbc2ba9
	I1107 09:12:19.414615    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:19.414624    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:19.414659    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:19.414736    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:19.917252    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:19.917278    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:19.917291    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:19.917301    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:19.921124    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:19.921140    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:19.921148    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:19.921156    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:19 GMT
	I1107 09:12:19.921182    9678 round_trippers.go:580]     Audit-Id: eae88b3f-c227-4d63-946f-7e31c757ebec
	I1107 09:12:19.921196    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:19.921206    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:19.921212    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:19.921320    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"985","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6602 chars]
	I1107 09:12:19.921697    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:19.921710    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:19.921718    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:19.921725    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:19.923803    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:19.923811    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:19.923817    9678 round_trippers.go:580]     Audit-Id: 05634058-30e3-4919-bcd9-b392654e6282
	I1107 09:12:19.923821    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:19.923827    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:19.923831    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:19.923835    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:19.923840    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:19 GMT
	I1107 09:12:19.923888    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:20.415402    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:20.415415    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:20.415421    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:20.415430    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:20.417606    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:20.417616    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:20.417621    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:20.417626    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:20.417631    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:20.417636    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:20 GMT
	I1107 09:12:20.417640    9678 round_trippers.go:580]     Audit-Id: 2f35f98d-5b52-4d56-aa92-ab95ed7e61eb
	I1107 09:12:20.417647    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:20.418028    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"985","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6602 chars]
	I1107 09:12:20.418323    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:20.418330    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:20.418336    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:20.418355    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:20.420570    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:20.420579    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:20.420585    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:20.420590    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:20 GMT
	I1107 09:12:20.420597    9678 round_trippers.go:580]     Audit-Id: b6a2acec-7621-4852-bf73-6c6a2d52568a
	I1107 09:12:20.420602    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:20.420607    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:20.420611    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:20.420656    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:20.917245    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:20.917271    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:20.917283    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:20.917293    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:20.921138    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:20.921156    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:20.921167    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:20 GMT
	I1107 09:12:20.921177    9678 round_trippers.go:580]     Audit-Id: 26be362b-6659-4783-9fff-92f3f159340b
	I1107 09:12:20.921185    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:20.921197    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:20.921204    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:20.921228    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:20.921446    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"985","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6602 chars]
	I1107 09:12:20.921841    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:20.921847    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:20.921853    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:20.921859    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:20.923740    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:20.923750    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:20.923755    9678 round_trippers.go:580]     Audit-Id: 7aa08aa7-03ba-473f-abc0-8854dee983bb
	I1107 09:12:20.923760    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:20.923769    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:20.923774    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:20.923779    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:20.923784    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:20 GMT
	I1107 09:12:20.923833    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:21.415374    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:21.415392    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:21.415400    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:21.415405    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:21.418316    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:21.418331    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:21.418338    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:21.418343    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:21.418347    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:21.418358    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:21.418366    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:21 GMT
	I1107 09:12:21.418373    9678 round_trippers.go:580]     Audit-Id: a12fab9f-22ee-4c4e-b224-cebdf2b59223
	I1107 09:12:21.418468    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"985","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6602 chars]
	I1107 09:12:21.418818    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:21.418826    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:21.418833    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:21.418838    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:21.421497    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:21.421508    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:21.421513    9678 round_trippers.go:580]     Audit-Id: 3334edd7-34d9-483d-9e19-56a34b7eb4b0
	I1107 09:12:21.421518    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:21.421523    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:21.421528    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:21.421535    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:21.421541    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:21 GMT
	I1107 09:12:21.421778    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:21.421988    9678 pod_ready.go:102] pod "coredns-565d847f94-54csh" in "kube-system" namespace has status "Ready":"False"
	I1107 09:12:21.917345    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:21.917366    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:21.917379    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:21.917390    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:21.921254    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:21.921271    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:21.921279    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:21 GMT
	I1107 09:12:21.921287    9678 round_trippers.go:580]     Audit-Id: 1f11a7b5-2751-4f8f-92e0-50d3e71aedcd
	I1107 09:12:21.921294    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:21.921301    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:21.921314    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:21.921326    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:21.921426    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"985","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6602 chars]
	I1107 09:12:21.921809    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:21.921818    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:21.921843    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:21.921849    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:21.924169    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:21.924179    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:21.924184    9678 round_trippers.go:580]     Audit-Id: b9ec301c-f596-49b5-951f-3c1ec88b77cd
	I1107 09:12:21.924191    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:21.924199    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:21.924204    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:21.924208    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:21.924214    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:21 GMT
	I1107 09:12:21.924375    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:22.417053    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:22.417073    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:22.417087    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:22.417109    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:22.420912    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:22.420925    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:22.420932    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:22.420939    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:22.420946    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:22.420953    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:22 GMT
	I1107 09:12:22.420961    9678 round_trippers.go:580]     Audit-Id: b9e74233-b2b2-4d47-a543-3c04d87aa9da
	I1107 09:12:22.420968    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:22.421039    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1022","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6780 chars]
	I1107 09:12:22.421370    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:22.421377    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:22.421383    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:22.421388    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:22.423284    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:22.423293    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:22.423299    9678 round_trippers.go:580]     Audit-Id: 8d59a573-b0c5-4909-a555-5bff75e067f6
	I1107 09:12:22.423304    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:22.423309    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:22.423315    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:22.423319    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:22.423324    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:22 GMT
	I1107 09:12:22.423369    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:22.915365    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:22.915387    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:22.915399    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:22.915409    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:22.919071    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:22.919081    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:22.919086    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:22.919091    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:22.919096    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:22.919113    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:22 GMT
	I1107 09:12:22.919123    9678 round_trippers.go:580]     Audit-Id: 7460eae7-03b4-4928-a838-088790c91139
	I1107 09:12:22.919128    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:22.919288    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1022","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6780 chars]
	I1107 09:12:22.919575    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:22.919581    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:22.919587    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:22.919594    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:22.921408    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:22.921416    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:22.921421    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:22.921426    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:22 GMT
	I1107 09:12:22.921432    9678 round_trippers.go:580]     Audit-Id: de7b3a57-b001-4baf-9573-29142384cfd2
	I1107 09:12:22.921436    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:22.921441    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:22.921446    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:22.921780    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:23.415241    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:23.415253    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:23.415263    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:23.415270    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:23.418664    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:23.418677    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:23.418684    9678 round_trippers.go:580]     Audit-Id: 23b1a853-521a-48e2-a07e-866fea668734
	I1107 09:12:23.418689    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:23.418694    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:23.418698    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:23.418704    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:23.418708    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:23 GMT
	I1107 09:12:23.418775    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1022","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6780 chars]
	I1107 09:12:23.419079    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:23.419085    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:23.419092    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:23.419097    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:23.420890    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:23.420900    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:23.420905    9678 round_trippers.go:580]     Audit-Id: a8ae45f5-8573-4d36-b67d-30d2a1a59c10
	I1107 09:12:23.420910    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:23.420916    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:23.420920    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:23.420925    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:23.420930    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:23 GMT
	I1107 09:12:23.420971    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:23.915525    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:23.915548    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:23.915561    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:23.915571    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:23.919402    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:23.919420    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:23.919444    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:23.919449    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:23.919454    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:23 GMT
	I1107 09:12:23.919462    9678 round_trippers.go:580]     Audit-Id: f4220a37-7692-4e36-8e45-73a04b227e34
	I1107 09:12:23.919468    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:23.919474    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:23.919543    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1022","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6780 chars]
	I1107 09:12:23.919827    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:23.919833    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:23.919839    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:23.919844    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:23.921863    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:23.921873    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:23.921883    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:23.921888    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:23 GMT
	I1107 09:12:23.921892    9678 round_trippers.go:580]     Audit-Id: b7206d8e-45bb-4234-800a-b106be45c3a6
	I1107 09:12:23.921897    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:23.921902    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:23.921907    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:23.922028    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:23.922221    9678 pod_ready.go:102] pod "coredns-565d847f94-54csh" in "kube-system" namespace has status "Ready":"False"
	I1107 09:12:24.417350    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:24.417371    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:24.417383    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:24.417392    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:24.421167    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:24.421182    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:24.421190    9678 round_trippers.go:580]     Audit-Id: 12cc1d43-769c-4c64-bc2f-d6f89bc7fafb
	I1107 09:12:24.421197    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:24.421206    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:24.421212    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:24.421219    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:24.421225    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:24 GMT
	I1107 09:12:24.421309    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1022","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6780 chars]
	I1107 09:12:24.421704    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:24.421710    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:24.421717    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:24.421723    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:24.423518    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:24.423528    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:24.423533    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:24.423538    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:24.423543    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:24.423548    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:24 GMT
	I1107 09:12:24.423553    9678 round_trippers.go:580]     Audit-Id: 1b8cc09e-65e5-45b3-83eb-ca889abdf9a4
	I1107 09:12:24.423557    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:24.423793    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:24.917447    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:24.917471    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:24.917485    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:24.917495    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:24.921259    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:24.921274    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:24.921290    9678 round_trippers.go:580]     Audit-Id: 0e8ed72a-9c08-4be2-b146-c879b3c5a1df
	I1107 09:12:24.921298    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:24.921304    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:24.921310    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:24.921316    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:24.921326    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:24 GMT
	I1107 09:12:24.921811    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1022","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6780 chars]
	I1107 09:12:24.922113    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:24.922120    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:24.922126    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:24.922131    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:24.924216    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:24.924226    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:24.924232    9678 round_trippers.go:580]     Audit-Id: 2cb82c29-1f5d-4fe2-8b27-b9c39d44afc0
	I1107 09:12:24.924237    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:24.924243    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:24.924255    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:24.924261    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:24.924266    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:24 GMT
	I1107 09:12:24.924317    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:25.415277    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:25.415291    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:25.415298    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:25.415303    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:25.417529    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:25.417539    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:25.417546    9678 round_trippers.go:580]     Audit-Id: 3fc735f4-2f6c-4700-a551-fecb424ae2be
	I1107 09:12:25.417550    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:25.417556    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:25.417560    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:25.417567    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:25.417572    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:25 GMT
	I1107 09:12:25.417631    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1044","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6551 chars]
	I1107 09:12:25.417919    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:25.417926    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:25.417932    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:25.417938    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:25.420138    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:25.420147    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:25.420153    9678 round_trippers.go:580]     Audit-Id: bd8c627c-6526-421a-b738-46ce8b090881
	I1107 09:12:25.420157    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:25.420163    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:25.420167    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:25.420171    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:25.420177    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:25 GMT
	I1107 09:12:25.420375    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:25.420559    9678 pod_ready.go:92] pod "coredns-565d847f94-54csh" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:25.420570    9678 pod_ready.go:81] duration metric: took 6.011424402s waiting for pod "coredns-565d847f94-54csh" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:25.420578    9678 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:25.420607    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:25.420611    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:25.420617    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:25.420623    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:25.422475    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:25.422487    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:25.422495    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:25.422504    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:25 GMT
	I1107 09:12:25.422511    9678 round_trippers.go:580]     Audit-Id: c7f22e37-9721-4e63-bcf8-beac26a24639
	I1107 09:12:25.422518    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:25.422523    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:25.422530    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:25.422725    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1004","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6268 chars]
	I1107 09:12:25.422989    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:25.422996    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:25.423001    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:25.423007    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:25.424720    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:25.424734    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:25.424751    9678 round_trippers.go:580]     Audit-Id: 2c3af5b2-ca15-47d5-9892-eef03b298c72
	I1107 09:12:25.424759    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:25.424764    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:25.424769    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:25.424774    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:25.424779    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:25 GMT
	I1107 09:12:25.424835    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:25.927364    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:25.927387    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:25.927400    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:25.927411    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:25.931228    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:25.931242    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:25.931250    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:25.931257    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:25.931263    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:25 GMT
	I1107 09:12:25.931270    9678 round_trippers.go:580]     Audit-Id: a4d78318-c992-4b1e-9169-8115b9b65794
	I1107 09:12:25.931276    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:25.931282    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:25.931380    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1004","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6268 chars]
	I1107 09:12:25.931710    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:25.931719    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:25.931727    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:25.931735    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:25.933528    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:25.933536    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:25.933541    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:25.933546    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:25.933550    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:25 GMT
	I1107 09:12:25.933557    9678 round_trippers.go:580]     Audit-Id: 72a8439c-7b5d-4bc5-80b2-42653ce9383e
	I1107 09:12:25.933562    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:25.933567    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:25.933603    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:26.426051    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:26.426077    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:26.426181    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:26.426192    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:26.430111    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:26.430126    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:26.430134    9678 round_trippers.go:580]     Audit-Id: 2e0c7db8-4251-44c3-8c66-acea90823aa8
	I1107 09:12:26.430146    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:26.430154    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:26.430164    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:26.430173    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:26.430180    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:26 GMT
	I1107 09:12:26.430245    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1004","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6268 chars]
	I1107 09:12:26.430579    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:26.430585    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:26.430591    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:26.430596    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:26.432361    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:26.432372    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:26.432380    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:26.432389    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:26.432398    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:26.432404    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:26.432420    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:26 GMT
	I1107 09:12:26.432427    9678 round_trippers.go:580]     Audit-Id: a6b34a2d-3e68-4129-b863-20ec047a2aee
	I1107 09:12:26.432661    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:26.927346    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:26.927370    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:26.927383    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:26.927393    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:26.931132    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:26.931147    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:26.931156    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:26.931164    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:26.931171    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:26.931180    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:26.931187    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:26 GMT
	I1107 09:12:26.931194    9678 round_trippers.go:580]     Audit-Id: dc7d4c62-6fe6-413c-8837-7d843adbc531
	I1107 09:12:26.931265    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1004","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6268 chars]
	I1107 09:12:26.931596    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:26.931605    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:26.931613    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:26.931620    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:26.933521    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:26.933530    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:26.933535    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:26.933541    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:26 GMT
	I1107 09:12:26.933546    9678 round_trippers.go:580]     Audit-Id: b5fa065e-90e7-4580-a904-3dbe40fd407a
	I1107 09:12:26.933551    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:26.933555    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:26.933559    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:26.933714    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:27.427273    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:27.427299    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:27.427311    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:27.427321    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:27.431209    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:27.431225    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:27.431233    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:27.431240    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:27 GMT
	I1107 09:12:27.431246    9678 round_trippers.go:580]     Audit-Id: cf647470-e446-4afb-a4d7-66ffba7f07e0
	I1107 09:12:27.431253    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:27.431260    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:27.431267    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:27.431329    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1004","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6268 chars]
	I1107 09:12:27.432257    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:27.432270    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:27.432283    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:27.432295    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:27.434423    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:27.434433    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:27.434439    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:27 GMT
	I1107 09:12:27.434444    9678 round_trippers.go:580]     Audit-Id: 679269e4-13ca-4601-8cd9-10e2eb1c6dbd
	I1107 09:12:27.434449    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:27.434453    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:27.434458    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:27.434462    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:27.434506    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:27.434684    9678 pod_ready.go:102] pod "etcd-multinode-090641" in "kube-system" namespace has status "Ready":"False"
	I1107 09:12:27.925431    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:27.925452    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:27.925465    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:27.925474    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:27.928902    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:27.928916    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:27.928922    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:27.928928    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:27.928932    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:27.928937    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:27.928942    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:27 GMT
	I1107 09:12:27.928947    9678 round_trippers.go:580]     Audit-Id: 4926074e-7742-4b12-bea4-be0a73066e84
	I1107 09:12:27.929003    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1004","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6268 chars]
	I1107 09:12:27.929262    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:27.929270    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:27.929276    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:27.929281    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:27.931160    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:27.931170    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:27.931176    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:27 GMT
	I1107 09:12:27.931181    9678 round_trippers.go:580]     Audit-Id: 26a82c72-96ad-4070-9efe-a104e35e8c53
	I1107 09:12:27.931187    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:27.931192    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:27.931197    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:27.931201    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:27.931249    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:28.427433    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:28.427455    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:28.427467    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:28.427477    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:28.431294    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:28.431309    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:28.431317    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:28 GMT
	I1107 09:12:28.431324    9678 round_trippers.go:580]     Audit-Id: b8ff3c0c-6e09-4d71-8f00-e7e9c500addf
	I1107 09:12:28.431331    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:28.431338    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:28.431345    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:28.431351    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:28.431412    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1004","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6268 chars]
	I1107 09:12:28.431732    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:28.431738    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:28.431744    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:28.431761    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:28.433544    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:28.433553    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:28.433559    9678 round_trippers.go:580]     Audit-Id: 3449e1d9-2deb-4d08-b909-346811b9b8c8
	I1107 09:12:28.433565    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:28.433570    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:28.433574    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:28.433579    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:28.433585    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:28 GMT
	I1107 09:12:28.433619    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:28.927318    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:28.927344    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:28.927361    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:28.927465    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:28.930854    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:28.930871    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:28.930882    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:28.930893    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:28.930914    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:28.930926    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:28.930945    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:28 GMT
	I1107 09:12:28.930969    9678 round_trippers.go:580]     Audit-Id: e085b733-447d-4aed-aa89-68068a2d6652
	I1107 09:12:28.931265    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1004","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6268 chars]
	I1107 09:12:28.931606    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:28.931629    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:28.931635    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:28.931641    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:28.933268    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:28.933278    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:28.933283    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:28.933288    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:28 GMT
	I1107 09:12:28.933294    9678 round_trippers.go:580]     Audit-Id: cf0d4e81-8f3b-4bf2-8c06-bad1dd239f3e
	I1107 09:12:28.933314    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:28.933326    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:28.933334    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:28.933576    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:29.425201    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:29.425217    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.425224    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.425230    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.427952    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:29.427964    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.427970    9678 round_trippers.go:580]     Audit-Id: 6b74d881-e10c-4e57-bd17-881874e0f504
	I1107 09:12:29.427981    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.427987    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.427991    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.427996    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.428000    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.428051    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1070","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6044 chars]
	I1107 09:12:29.428305    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:29.428312    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.428318    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.428324    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.430023    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:29.430034    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.430042    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.430047    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.430053    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.430058    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.430064    9678 round_trippers.go:580]     Audit-Id: 0260119d-7e64-4e91-8266-b7275e9fe925
	I1107 09:12:29.430069    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.430349    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:29.430565    9678 pod_ready.go:92] pod "etcd-multinode-090641" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:29.430576    9678 pod_ready.go:81] duration metric: took 4.009889972s waiting for pod "etcd-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.430587    9678 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.430616    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-090641
	I1107 09:12:29.430621    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.430627    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.430632    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.432726    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:29.432736    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.432742    9678 round_trippers.go:580]     Audit-Id: b226ecbb-f322-481d-9d3c-9d1a0d8a2bb9
	I1107 09:12:29.432747    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.432752    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.432757    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.432763    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.432769    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.432823    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-090641","namespace":"kube-system","uid":"3ae5af06-6458-4954-a296-a43002732bf4","resourceVersion":"1035","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"dd75cb8a49e2d9527f374a354a8b7d88","kubernetes.io/config.mirror":"dd75cb8a49e2d9527f374a354a8b7d88","kubernetes.io/config.seen":"2022-11-07T17:07:07.141853016Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8430 chars]
	I1107 09:12:29.433074    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:29.433080    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.433086    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.433091    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.435372    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:29.435382    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.435387    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.435392    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.435398    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.435404    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.435411    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.435418    9678 round_trippers.go:580]     Audit-Id: f216cb37-26d7-47be-9a9d-39d2d66c3533
	I1107 09:12:29.435468    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:29.435659    9678 pod_ready.go:92] pod "kube-apiserver-multinode-090641" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:29.435666    9678 pod_ready.go:81] duration metric: took 5.073553ms waiting for pod "kube-apiserver-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.435672    9678 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.435697    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-090641
	I1107 09:12:29.435703    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.435710    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.435717    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.437746    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:29.437757    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.437764    9678 round_trippers.go:580]     Audit-Id: 4eabb133-e54d-41fe-8d32-a4ba700ef567
	I1107 09:12:29.437772    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.437778    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.437785    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.437792    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.437799    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.438159    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-090641","namespace":"kube-system","uid":"1c2584e6-6b2e-4c67-aea4-7c5568355345","resourceVersion":"1050","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f4f4b8d09f56092bdb6c988421c46dbc","kubernetes.io/config.mirror":"f4f4b8d09f56092bdb6c988421c46dbc","kubernetes.io/config.seen":"2022-11-07T17:07:07.141853863Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 8005 chars]
	I1107 09:12:29.438431    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:29.438438    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.438446    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.438452    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.440250    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:29.440258    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.440264    9678 round_trippers.go:580]     Audit-Id: ec203f5c-6933-4cc2-8b06-1ff229bc07f8
	I1107 09:12:29.440268    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.440273    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.440278    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.440284    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.440295    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.440336    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:29.440508    9678 pod_ready.go:92] pod "kube-controller-manager-multinode-090641" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:29.440515    9678 pod_ready.go:81] duration metric: took 4.838656ms waiting for pod "kube-controller-manager-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.440522    9678 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hxglr" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.440546    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-hxglr
	I1107 09:12:29.440550    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.440556    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.440562    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.442489    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:29.442498    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.442503    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.442509    9678 round_trippers.go:580]     Audit-Id: 90d2d8d2-bfb7-4ef4-b22a-a44928524ec6
	I1107 09:12:29.442514    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.442519    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.442524    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.442529    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.442575    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hxglr","generateName":"kube-proxy-","namespace":"kube-system","uid":"64e6c03e-e0da-4b75-a1eb-ff55dd0c84ff","resourceVersion":"846","creationTimestamp":"2022-11-07T17:07:43Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"24ccc204-14dd-4551-b05e-811ba8bd745a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ccc204-14dd-4551-b05e-811ba8bd745a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I1107 09:12:29.442792    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641-m02
	I1107 09:12:29.442798    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.442804    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.442810    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.444671    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:29.444681    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.444686    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.444691    9678 round_trippers.go:580]     Audit-Id: 16cb5476-f8e9-44ae-ab6f-dfed0eca938b
	I1107 09:12:29.444696    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.444701    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.444707    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.444711    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.445062    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641-m02","uid":"1d15109e-f0b8-4c7f-a0d6-c4e58cde1a91","resourceVersion":"858","creationTimestamp":"2022-11-07T17:10:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:10:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:10:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4536 chars]
	I1107 09:12:29.445219    9678 pod_ready.go:92] pod "kube-proxy-hxglr" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:29.445225    9678 pod_ready.go:81] duration metric: took 4.69866ms waiting for pod "kube-proxy-hxglr" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.445230    9678 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nwck5" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.445254    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-nwck5
	I1107 09:12:29.445259    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.445264    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.445270    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.447018    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:29.447027    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.447033    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.447039    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.447044    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.447048    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.447053    9678 round_trippers.go:580]     Audit-Id: 5ea3ae87-f747-4cc9-bb88-c02db4498193
	I1107 09:12:29.447057    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.447242    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nwck5","generateName":"kube-proxy-","namespace":"kube-system","uid":"017b9de2-3593-4e50-9493-7d14c0b994ce","resourceVersion":"945","creationTimestamp":"2022-11-07T17:08:26Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"24ccc204-14dd-4551-b05e-811ba8bd745a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:08:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ccc204-14dd-4551-b05e-811ba8bd745a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5743 chars]
	I1107 09:12:29.447466    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641-m03
	I1107 09:12:29.447472    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.447478    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.447484    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.449006    9678 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1107 09:12:29.449014    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.449019    9678 round_trippers.go:580]     Audit-Id: 52b68d52-8c1b-4bc4-ba13-b914a3312665
	I1107 09:12:29.449023    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.449028    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.449033    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.449039    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.449043    9678 round_trippers.go:580]     Content-Length: 210
	I1107 09:12:29.449048    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.449057    9678 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-090641-m03\" not found","reason":"NotFound","details":{"name":"multinode-090641-m03","kind":"nodes"},"code":404}
	I1107 09:12:29.449156    9678 pod_ready.go:97] node "multinode-090641-m03" hosting pod "kube-proxy-nwck5" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-090641-m03": nodes "multinode-090641-m03" not found
	I1107 09:12:29.449163    9678 pod_ready.go:81] duration metric: took 3.928728ms waiting for pod "kube-proxy-nwck5" in "kube-system" namespace to be "Ready" ...
	E1107 09:12:29.449168    9678 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-090641-m03" hosting pod "kube-proxy-nwck5" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-090641-m03": nodes "multinode-090641-m03" not found
	I1107 09:12:29.449174    9678 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rqnqb" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.626425    9678 request.go:614] Waited for 177.201575ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-rqnqb
	I1107 09:12:29.626509    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-rqnqb
	I1107 09:12:29.626519    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.626532    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.626542    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.630543    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:29.630558    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.630566    9678 round_trippers.go:580]     Audit-Id: 4c2b2093-b33e-48d5-b3b9-3d2a6264b1cc
	I1107 09:12:29.630572    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.630579    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.630586    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.630592    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.630599    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.630677    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rqnqb","generateName":"kube-proxy-","namespace":"kube-system","uid":"f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d","resourceVersion":"1029","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"24ccc204-14dd-4551-b05e-811ba8bd745a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ccc204-14dd-4551-b05e-811ba8bd745a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
	I1107 09:12:29.826052    9678 request.go:614] Waited for 195.027314ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:29.826095    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:29.826104    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.826113    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.826121    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.828657    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:29.828668    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.828673    9678 round_trippers.go:580]     Audit-Id: bf3ef967-03df-401a-b943-c3c1834e40e8
	I1107 09:12:29.828680    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.828686    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.828691    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.828696    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.828701    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.828830    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:29.829027    9678 pod_ready.go:92] pod "kube-proxy-rqnqb" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:29.829034    9678 pod_ready.go:81] duration metric: took 379.845663ms waiting for pod "kube-proxy-rqnqb" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.829040    9678 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:30.025357    9678 request.go:614] Waited for 196.266866ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-090641
	I1107 09:12:30.025493    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-090641
	I1107 09:12:30.025505    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:30.025517    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:30.025529    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:30.028893    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:30.028907    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:30.028916    9678 round_trippers.go:580]     Audit-Id: 79c57695-7aec-4ff0-b532-44532c205ce1
	I1107 09:12:30.028922    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:30.028930    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:30.028936    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:30.028942    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:30.028949    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:30 GMT
	I1107 09:12:30.029017    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-090641","namespace":"kube-system","uid":"76a48883-135f-49f5-831d-d0182408b2ca","resourceVersion":"1041","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ab8bd35a88b2fdd19251e7cd74d99137","kubernetes.io/config.mirror":"ab8bd35a88b2fdd19251e7cd74d99137","kubernetes.io/config.seen":"2022-11-07T17:07:07.141854549Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4887 chars]
	I1107 09:12:30.225437    9678 request.go:614] Waited for 196.110666ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:30.225548    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:30.225559    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:30.225571    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:30.225581    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:30.229341    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:30.229357    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:30.229364    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:30.229373    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:30 GMT
	I1107 09:12:30.229382    9678 round_trippers.go:580]     Audit-Id: 7ad65763-80b8-44e7-a4d6-ab7e5b6d03cc
	I1107 09:12:30.229390    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:30.229396    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:30.229402    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:30.229558    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:30.229791    9678 pod_ready.go:92] pod "kube-scheduler-multinode-090641" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:30.229798    9678 pod_ready.go:81] duration metric: took 400.743057ms waiting for pod "kube-scheduler-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:30.229804    9678 pod_ready.go:38] duration metric: took 10.828022896s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 09:12:30.229817    9678 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 09:12:30.237682    9678 command_runner.go:130] > -16
	I1107 09:12:30.237696    9678 ops.go:34] apiserver oom_adj: -16
	I1107 09:12:30.237701    9678 kubeadm.go:631] restartCluster took 23.17473863s
	I1107 09:12:30.237709    9678 kubeadm.go:398] StartCluster complete in 23.205230176s
	I1107 09:12:30.237721    9678 settings.go:142] acquiring lock: {Name:mkacd69bfe5f4d7bab8b044c0ff487fe5c3f0cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:12:30.237811    9678 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:12:30.238187    9678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/kubeconfig: {Name:mk892d56d979702eee7d784abc692970bda7bca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:12:30.238807    9678 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:12:30.239001    9678 kapi.go:59] client config for multinode-090641: &rest.Config{Host:"https://127.0.0.1:51429", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.key", CAFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2345ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 09:12:30.239190    9678 round_trippers.go:463] GET https://127.0.0.1:51429/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 09:12:30.239196    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:30.239202    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:30.239208    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:30.241483    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:30.241492    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:30.241498    9678 round_trippers.go:580]     Content-Length: 292
	I1107 09:12:30.241505    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:30 GMT
	I1107 09:12:30.241511    9678 round_trippers.go:580]     Audit-Id: 6260f10a-6484-4908-b402-a5848d4dfefa
	I1107 09:12:30.241515    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:30.241521    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:30.241525    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:30.241531    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:30.241542    9678 request.go:1154] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bc5e3846-355c-4569-b48e-ed482b8ae45b","resourceVersion":"1078","creationTimestamp":"2022-11-07T17:07:07Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1107 09:12:30.241613    9678 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-090641" rescaled to 1
	I1107 09:12:30.241643    9678 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 09:12:30.241662    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 09:12:30.241690    9678 addons.go:486] enableAddons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I1107 09:12:30.241811    9678 config.go:180] Loaded profile config "multinode-090641": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:12:30.283890    9678 addons.go:65] Setting storage-provisioner=true in profile "multinode-090641"
	I1107 09:12:30.283891    9678 addons.go:65] Setting default-storageclass=true in profile "multinode-090641"
	I1107 09:12:30.283754    9678 out.go:177] * Verifying Kubernetes components...
	I1107 09:12:30.283940    9678 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-090641"
	I1107 09:12:30.283939    9678 addons.go:227] Setting addon storage-provisioner=true in "multinode-090641"
	W1107 09:12:30.305413    9678 addons.go:236] addon storage-provisioner should already be in state true
	I1107 09:12:30.305430    9678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 09:12:30.305478    9678 host.go:66] Checking if "multinode-090641" exists ...
	I1107 09:12:30.305718    9678 cli_runner.go:164] Run: docker container inspect multinode-090641 --format={{.State.Status}}
	I1107 09:12:30.305827    9678 cli_runner.go:164] Run: docker container inspect multinode-090641 --format={{.State.Status}}
	I1107 09:12:30.352382    9678 command_runner.go:130] > apiVersion: v1
	I1107 09:12:30.352405    9678 command_runner.go:130] > data:
	I1107 09:12:30.352410    9678 command_runner.go:130] >   Corefile: |
	I1107 09:12:30.352413    9678 command_runner.go:130] >     .:53 {
	I1107 09:12:30.352417    9678 command_runner.go:130] >         errors
	I1107 09:12:30.352425    9678 command_runner.go:130] >         health {
	I1107 09:12:30.352433    9678 command_runner.go:130] >            lameduck 5s
	I1107 09:12:30.352438    9678 command_runner.go:130] >         }
	I1107 09:12:30.352443    9678 command_runner.go:130] >         ready
	I1107 09:12:30.352457    9678 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1107 09:12:30.352465    9678 command_runner.go:130] >            pods insecure
	I1107 09:12:30.352474    9678 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1107 09:12:30.352484    9678 command_runner.go:130] >            ttl 30
	I1107 09:12:30.352488    9678 command_runner.go:130] >         }
	I1107 09:12:30.352492    9678 command_runner.go:130] >         prometheus :9153
	I1107 09:12:30.352499    9678 command_runner.go:130] >         hosts {
	I1107 09:12:30.352508    9678 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I1107 09:12:30.352514    9678 command_runner.go:130] >            fallthrough
	I1107 09:12:30.352526    9678 command_runner.go:130] >         }
	I1107 09:12:30.352536    9678 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1107 09:12:30.352542    9678 command_runner.go:130] >            max_concurrent 1000
	I1107 09:12:30.352546    9678 command_runner.go:130] >         }
	I1107 09:12:30.352550    9678 command_runner.go:130] >         cache 30
	I1107 09:12:30.352553    9678 command_runner.go:130] >         loop
	I1107 09:12:30.352561    9678 command_runner.go:130] >         reload
	I1107 09:12:30.352568    9678 command_runner.go:130] >         loadbalance
	I1107 09:12:30.352572    9678 command_runner.go:130] >     }
	I1107 09:12:30.352576    9678 command_runner.go:130] > kind: ConfigMap
	I1107 09:12:30.352582    9678 command_runner.go:130] > metadata:
	I1107 09:12:30.352586    9678 command_runner.go:130] >   creationTimestamp: "2022-11-07T17:07:07Z"
	I1107 09:12:30.352592    9678 command_runner.go:130] >   name: coredns
	I1107 09:12:30.352596    9678 command_runner.go:130] >   namespace: kube-system
	I1107 09:12:30.352601    9678 command_runner.go:130] >   resourceVersion: "365"
	I1107 09:12:30.352608    9678 command_runner.go:130] >   uid: 8d2177f7-228b-4356-ad32-03ee101d8c94
	I1107 09:12:30.352714    9678 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1107 09:12:30.352829    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:30.369035    9678 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:12:30.390272    9678 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 09:12:30.390547    9678 kapi.go:59] client config for multinode-090641: &rest.Config{Host:"https://127.0.0.1:51429", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.key", CAFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2345ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 09:12:30.411365    9678 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 09:12:30.411384    9678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 09:12:30.411537    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:30.411758    9678 round_trippers.go:463] GET https://127.0.0.1:51429/apis/storage.k8s.io/v1/storageclasses
	I1107 09:12:30.411776    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:30.411789    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:30.412642    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:30.416093    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:30.416107    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:30.416113    9678 round_trippers.go:580]     Audit-Id: 7845e956-c9ff-4f97-8e84-cbaf393626c6
	I1107 09:12:30.416118    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:30.416122    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:30.416130    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:30.416136    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:30.416140    9678 round_trippers.go:580]     Content-Length: 1274
	I1107 09:12:30.416145    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:30 GMT
	I1107 09:12:30.416196    9678 request.go:1154] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"1082"},"items":[{"metadata":{"name":"standard","uid":"3f887db3-d96f-4dbf-948f-c470c5720b23","resourceVersion":"378","creationTimestamp":"2022-11-07T17:07:21Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-11-07T17:07:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubern
etes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is [truncated 250 chars]
	I1107 09:12:30.416657    9678 request.go:1154] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3f887db3-d96f-4dbf-948f-c470c5720b23","resourceVersion":"378","creationTimestamp":"2022-11-07T17:07:21Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-11-07T17:07:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1107 09:12:30.416694    9678 round_trippers.go:463] PUT https://127.0.0.1:51429/apis/storage.k8s.io/v1/storageclasses/standard
	I1107 09:12:30.416699    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:30.416706    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:30.416711    9678 round_trippers.go:473]     Content-Type: application/json
	I1107 09:12:30.416716    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:30.421027    9678 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 09:12:30.421046    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:30.421053    9678 round_trippers.go:580]     Content-Length: 1220
	I1107 09:12:30.421058    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:30 GMT
	I1107 09:12:30.421063    9678 round_trippers.go:580]     Audit-Id: 27eaa177-dbd5-4d6b-a7a8-3e55d1f8b1cb
	I1107 09:12:30.421068    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:30.421073    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:30.421077    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:30.421082    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:30.421101    9678 request.go:1154] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3f887db3-d96f-4dbf-948f-c470c5720b23","resourceVersion":"378","creationTimestamp":"2022-11-07T17:07:21Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-11-07T17:07:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1107 09:12:30.421172    9678 addons.go:227] Setting addon default-storageclass=true in "multinode-090641"
	W1107 09:12:30.421180    9678 addons.go:236] addon default-storageclass should already be in state true
	I1107 09:12:30.421195    9678 host.go:66] Checking if "multinode-090641" exists ...
	I1107 09:12:30.421573    9678 cli_runner.go:164] Run: docker container inspect multinode-090641 --format={{.State.Status}}
	I1107 09:12:30.423210    9678 node_ready.go:35] waiting up to 6m0s for node "multinode-090641" to be "Ready" ...
	I1107 09:12:30.425442    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:30.425450    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:30.425457    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:30.425462    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:30.427943    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:30.427962    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:30.427976    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:30.427985    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:30.427990    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:30 GMT
	I1107 09:12:30.427995    9678 round_trippers.go:580]     Audit-Id: 9c5de483-8509-419e-aebd-205ae798f5b1
	I1107 09:12:30.428000    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:30.428005    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:30.428068    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:30.428346    9678 node_ready.go:49] node "multinode-090641" has status "Ready":"True"
	I1107 09:12:30.428355    9678 node_ready.go:38] duration metric: took 5.124126ms waiting for node "multinode-090641" to be "Ready" ...
	I1107 09:12:30.428365    9678 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 09:12:30.473525    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:30.480958    9678 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 09:12:30.480969    9678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 09:12:30.481050    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:30.537952    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:30.564924    9678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 09:12:30.625801    9678 request.go:614] Waited for 197.384346ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods
	I1107 09:12:30.625851    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods
	I1107 09:12:30.625857    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:30.625863    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:30.625870    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:30.628654    9678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 09:12:30.630227    9678 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 09:12:30.630241    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:30.630247    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:30.630255    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:30 GMT
	I1107 09:12:30.630262    9678 round_trippers.go:580]     Audit-Id: b2b99a63-956c-425d-8272-25441cdb5be8
	I1107 09:12:30.630268    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:30.630273    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:30.630278    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:30.631727    9678 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1082"},"items":[{"metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1044","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84957 chars]
	I1107 09:12:30.634066    9678 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-54csh" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:30.727528    9678 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I1107 09:12:30.729314    9678 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I1107 09:12:30.731810    9678 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1107 09:12:30.733453    9678 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1107 09:12:30.735252    9678 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I1107 09:12:30.776061    9678 command_runner.go:130] > pod/storage-provisioner configured
	I1107 09:12:30.825343    9678 request.go:614] Waited for 191.221708ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:30.825386    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:30.825392    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:30.825410    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:30.825420    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:30.828048    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:30.828062    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:30.828068    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:30 GMT
	I1107 09:12:30.828072    9678 round_trippers.go:580]     Audit-Id: 520a42f8-1e55-4ed3-800a-4a7ec27cdf34
	I1107 09:12:30.828078    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:30.828082    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:30.828087    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:30.828092    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:30.828156    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1044","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6551 chars]
	I1107 09:12:30.834978    9678 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I1107 09:12:30.884287    9678 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1107 09:12:30.905405    9678 addons.go:488] enableAddons completed in 663.696406ms
	I1107 09:12:31.025727    9678 request.go:614] Waited for 197.257088ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:31.025827    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:31.025837    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:31.025850    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:31.025863    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:31.029840    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:31.029855    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:31.029864    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:31.029877    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:31 GMT
	I1107 09:12:31.029893    9678 round_trippers.go:580]     Audit-Id: ce2e8a40-75ad-43aa-b6e8-97e16cd4546e
	I1107 09:12:31.029900    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:31.029907    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:31.029913    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:31.029983    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:31.030247    9678 pod_ready.go:92] pod "coredns-565d847f94-54csh" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:31.030254    9678 pod_ready.go:81] duration metric: took 396.163978ms waiting for pod "coredns-565d847f94-54csh" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:31.030260    9678 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:31.225645    9678 request.go:614] Waited for 195.273388ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:31.225703    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:31.225715    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:31.225727    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:31.225738    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:31.229467    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:31.229483    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:31.229491    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:31 GMT
	I1107 09:12:31.229497    9678 round_trippers.go:580]     Audit-Id: 17435435-36d3-4071-b118-105361a31f15
	I1107 09:12:31.229503    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:31.229514    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:31.229523    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:31.229530    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:31.229834    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1070","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6044 chars]
	I1107 09:12:31.425335    9678 request.go:614] Waited for 195.124509ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:31.425366    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:31.425372    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:31.425379    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:31.425384    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:31.427921    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:31.427934    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:31.427940    9678 round_trippers.go:580]     Audit-Id: 3cfb11ed-2642-4ee3-b45b-37800f1f820c
	I1107 09:12:31.427949    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:31.427955    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:31.427959    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:31.427964    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:31.427969    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:31 GMT
	I1107 09:12:31.428115    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:31.428331    9678 pod_ready.go:92] pod "etcd-multinode-090641" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:31.428339    9678 pod_ready.go:81] duration metric: took 398.064619ms waiting for pod "etcd-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:31.428349    9678 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:31.627387    9678 request.go:614] Waited for 198.955792ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-090641
	I1107 09:12:31.627539    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-090641
	I1107 09:12:31.627552    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:31.627565    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:31.627575    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:31.631493    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:31.631511    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:31.631520    9678 round_trippers.go:580]     Audit-Id: aa3a36f2-a38b-4385-a40c-1454d9b56f21
	I1107 09:12:31.631527    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:31.631556    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:31.631567    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:31.631577    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:31.631584    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:31 GMT
	I1107 09:12:31.631658    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-090641","namespace":"kube-system","uid":"3ae5af06-6458-4954-a296-a43002732bf4","resourceVersion":"1035","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"dd75cb8a49e2d9527f374a354a8b7d88","kubernetes.io/config.mirror":"dd75cb8a49e2d9527f374a354a8b7d88","kubernetes.io/config.seen":"2022-11-07T17:07:07.141853016Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8430 chars]
	I1107 09:12:31.827031    9678 request.go:614] Waited for 195.035201ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:31.827138    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:31.827147    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:31.827162    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:31.827180    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:31.831080    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:31.831091    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:31.831097    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:31.831101    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:31.831108    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:31 GMT
	I1107 09:12:31.831113    9678 round_trippers.go:580]     Audit-Id: bbf099a6-fa76-417b-a18d-b655d18e0892
	I1107 09:12:31.831119    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:31.831123    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:31.831170    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:31.831383    9678 pod_ready.go:92] pod "kube-apiserver-multinode-090641" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:31.831391    9678 pod_ready.go:81] duration metric: took 403.026408ms waiting for pod "kube-apiserver-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:31.831398    9678 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:32.027368    9678 request.go:614] Waited for 195.899398ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-090641
	I1107 09:12:32.027544    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-090641
	I1107 09:12:32.027555    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:32.027566    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:32.027576    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:32.031473    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:32.031488    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:32.031495    9678 round_trippers.go:580]     Audit-Id: 7d17e35e-e8a9-405b-bc6a-4947c0070abf
	I1107 09:12:32.031502    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:32.031508    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:32.031516    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:32.031531    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:32.031539    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:32 GMT
	I1107 09:12:32.031908    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-090641","namespace":"kube-system","uid":"1c2584e6-6b2e-4c67-aea4-7c5568355345","resourceVersion":"1050","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f4f4b8d09f56092bdb6c988421c46dbc","kubernetes.io/config.mirror":"f4f4b8d09f56092bdb6c988421c46dbc","kubernetes.io/config.seen":"2022-11-07T17:07:07.141853863Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 8005 chars]
	I1107 09:12:32.226376    9678 request.go:614] Waited for 194.121304ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:32.226422    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:32.226430    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:32.226442    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:32.226465    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:32.230349    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:32.230365    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:32.230373    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:32.230380    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:32.230387    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:32.230397    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:32 GMT
	I1107 09:12:32.230404    9678 round_trippers.go:580]     Audit-Id: b60ecc82-8cad-49e5-aac6-3ee492825d0a
	I1107 09:12:32.230411    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:32.230485    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:32.230757    9678 pod_ready.go:92] pod "kube-controller-manager-multinode-090641" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:32.230763    9678 pod_ready.go:81] duration metric: took 399.350714ms waiting for pod "kube-controller-manager-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:32.230770    9678 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hxglr" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:32.427373    9678 request.go:614] Waited for 196.546616ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-hxglr
	I1107 09:12:32.427587    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-hxglr
	I1107 09:12:32.427602    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:32.427619    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:32.427647    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:32.431370    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:32.431385    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:32.431393    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:32.431399    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:32 GMT
	I1107 09:12:32.431406    9678 round_trippers.go:580]     Audit-Id: e54da4ac-f9eb-45fa-9653-c35eb6cb3b42
	I1107 09:12:32.431413    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:32.431419    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:32.431426    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:32.431502    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hxglr","generateName":"kube-proxy-","namespace":"kube-system","uid":"64e6c03e-e0da-4b75-a1eb-ff55dd0c84ff","resourceVersion":"846","creationTimestamp":"2022-11-07T17:07:43Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"24ccc204-14dd-4551-b05e-811ba8bd745a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ccc204-14dd-4551-b05e-811ba8bd745a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I1107 09:12:32.625358    9678 request.go:614] Waited for 193.521833ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641-m02
	I1107 09:12:32.625449    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641-m02
	I1107 09:12:32.625457    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:32.625468    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:32.625490    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:32.629085    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:32.629098    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:32.629104    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:32 GMT
	I1107 09:12:32.629109    9678 round_trippers.go:580]     Audit-Id: be85d5d8-54f0-49cf-8bd5-3fb5f000e9ba
	I1107 09:12:32.629114    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:32.629119    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:32.629126    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:32.629132    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:32.629229    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641-m02","uid":"1d15109e-f0b8-4c7f-a0d6-c4e58cde1a91","resourceVersion":"858","creationTimestamp":"2022-11-07T17:10:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:10:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:10:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4536 chars]
	I1107 09:12:32.629407    9678 pod_ready.go:92] pod "kube-proxy-hxglr" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:32.629414    9678 pod_ready.go:81] duration metric: took 398.628333ms waiting for pod "kube-proxy-hxglr" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:32.629420    9678 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nwck5" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:32.825363    9678 request.go:614] Waited for 195.900521ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-nwck5
	I1107 09:12:32.825468    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-nwck5
	I1107 09:12:32.825516    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:32.825529    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:32.825540    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:32.829492    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:32.829508    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:32.829516    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:32.829522    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:32.829529    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:32.829535    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:32.829543    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:32 GMT
	I1107 09:12:32.829555    9678 round_trippers.go:580]     Audit-Id: 938009c6-b676-4029-9896-067702d49676
	I1107 09:12:32.829638    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nwck5","generateName":"kube-proxy-","namespace":"kube-system","uid":"017b9de2-3593-4e50-9493-7d14c0b994ce","resourceVersion":"945","creationTimestamp":"2022-11-07T17:08:26Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"24ccc204-14dd-4551-b05e-811ba8bd745a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:08:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ccc204-14dd-4551-b05e-811ba8bd745a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5743 chars]
	I1107 09:12:33.027503    9678 request.go:614] Waited for 197.432804ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641-m03
	I1107 09:12:33.027550    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641-m03
	I1107 09:12:33.027558    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:33.027570    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:33.027582    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:33.031425    9678 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I1107 09:12:33.031445    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:33.031459    9678 round_trippers.go:580]     Audit-Id: 4e848c86-93e8-49b0-bbb6-7c281134cf31
	I1107 09:12:33.031470    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:33.031491    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:33.031519    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:33.031533    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:33.031543    9678 round_trippers.go:580]     Content-Length: 210
	I1107 09:12:33.031550    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:33 GMT
	I1107 09:12:33.031571    9678 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-090641-m03\" not found","reason":"NotFound","details":{"name":"multinode-090641-m03","kind":"nodes"},"code":404}
	I1107 09:12:33.031642    9678 pod_ready.go:97] node "multinode-090641-m03" hosting pod "kube-proxy-nwck5" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-090641-m03": nodes "multinode-090641-m03" not found
	I1107 09:12:33.031652    9678 pod_ready.go:81] duration metric: took 402.216594ms waiting for pod "kube-proxy-nwck5" in "kube-system" namespace to be "Ready" ...
	E1107 09:12:33.031660    9678 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-090641-m03" hosting pod "kube-proxy-nwck5" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-090641-m03": nodes "multinode-090641-m03" not found
	I1107 09:12:33.031668    9678 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rqnqb" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:33.227338    9678 request.go:614] Waited for 195.621561ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-rqnqb
	I1107 09:12:33.227479    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-rqnqb
	I1107 09:12:33.227489    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:33.227502    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:33.227513    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:33.231189    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:33.231206    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:33.231217    9678 round_trippers.go:580]     Audit-Id: 8ecd7478-24b3-4e1e-8c86-ec638699ea9c
	I1107 09:12:33.231228    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:33.231238    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:33.231252    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:33.231268    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:33.231283    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:33 GMT
	I1107 09:12:33.231422    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rqnqb","generateName":"kube-proxy-","namespace":"kube-system","uid":"f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d","resourceVersion":"1029","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"24ccc204-14dd-4551-b05e-811ba8bd745a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ccc204-14dd-4551-b05e-811ba8bd745a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
	I1107 09:12:33.425418    9678 request.go:614] Waited for 193.613243ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:33.425463    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:33.425472    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:33.425481    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:33.425491    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:33.428237    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:33.428247    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:33.428252    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:33.428257    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:33 GMT
	I1107 09:12:33.428262    9678 round_trippers.go:580]     Audit-Id: 250d2cf1-e02b-43de-8019-2835fc6bbf00
	I1107 09:12:33.428267    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:33.428272    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:33.428276    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:33.428324    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:33.428523    9678 pod_ready.go:92] pod "kube-proxy-rqnqb" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:33.428530    9678 pod_ready.go:81] duration metric: took 396.846496ms waiting for pod "kube-proxy-rqnqb" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:33.428535    9678 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:33.626693    9678 request.go:614] Waited for 198.100809ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-090641
	I1107 09:12:33.626829    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-090641
	I1107 09:12:33.626845    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:33.626859    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:33.626872    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:33.630954    9678 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 09:12:33.630971    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:33.630982    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:33.630991    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:33 GMT
	I1107 09:12:33.631000    9678 round_trippers.go:580]     Audit-Id: 106278b3-bbd0-4a43-a8c2-d390b51f92b6
	I1107 09:12:33.631008    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:33.631014    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:33.631021    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:33.631162    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-090641","namespace":"kube-system","uid":"76a48883-135f-49f5-831d-d0182408b2ca","resourceVersion":"1041","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ab8bd35a88b2fdd19251e7cd74d99137","kubernetes.io/config.mirror":"ab8bd35a88b2fdd19251e7cd74d99137","kubernetes.io/config.seen":"2022-11-07T17:07:07.141854549Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4887 chars]
	I1107 09:12:33.825636    9678 request.go:614] Waited for 194.128254ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:33.825791    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:33.825802    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:33.825814    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:33.825826    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:33.829974    9678 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 09:12:33.829986    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:33.829992    9678 round_trippers.go:580]     Audit-Id: 7af5ba8c-3e7e-44ea-82f6-ecddeb79acad
	I1107 09:12:33.829997    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:33.830001    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:33.830006    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:33.830011    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:33.830015    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:33 GMT
	I1107 09:12:33.830068    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:33.830268    9678 pod_ready.go:92] pod "kube-scheduler-multinode-090641" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:33.830274    9678 pod_ready.go:81] duration metric: took 401.723735ms waiting for pod "kube-scheduler-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:33.830281    9678 pod_ready.go:38] duration metric: took 3.40181826s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 09:12:33.830294    9678 api_server.go:51] waiting for apiserver process to appear ...
	I1107 09:12:33.830354    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:12:33.839672    9678 command_runner.go:130] > 1791
	I1107 09:12:33.840515    9678 api_server.go:71] duration metric: took 3.598766104s to wait for apiserver process to appear ...
	I1107 09:12:33.840524    9678 api_server.go:87] waiting for apiserver healthz status ...
	I1107 09:12:33.840535    9678 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51429/healthz ...
	I1107 09:12:33.845950    9678 api_server.go:278] https://127.0.0.1:51429/healthz returned 200:
	ok
	I1107 09:12:33.845979    9678 round_trippers.go:463] GET https://127.0.0.1:51429/version
	I1107 09:12:33.845984    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:33.845990    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:33.845996    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:33.846957    9678 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1107 09:12:33.846967    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:33.846972    9678 round_trippers.go:580]     Audit-Id: df77bf83-4269-4357-9553-7a5b9f6148e4
	I1107 09:12:33.846978    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:33.846983    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:33.846988    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:33.846993    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:33.846998    9678 round_trippers.go:580]     Content-Length: 263
	I1107 09:12:33.847002    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:33 GMT
	I1107 09:12:33.847012    9678 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1107 09:12:33.847042    9678 api_server.go:140] control plane version: v1.25.3
	I1107 09:12:33.847048    9678 api_server.go:130] duration metric: took 6.520078ms to wait for apiserver health ...
	I1107 09:12:33.847055    9678 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 09:12:34.025926    9678 request.go:614] Waited for 178.70746ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods
	I1107 09:12:34.025988    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods
	I1107 09:12:34.026000    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:34.026013    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:34.026024    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:34.031391    9678 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1107 09:12:34.031403    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:34.031409    9678 round_trippers.go:580]     Audit-Id: fb0ff7f8-a2c3-4df1-89cf-d50d8fadb2ee
	I1107 09:12:34.031434    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:34.031443    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:34.031448    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:34.031453    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:34.031458    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:34 GMT
	I1107 09:12:34.032825    9678 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1082"},"items":[{"metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1044","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84957 chars]
	I1107 09:12:34.034785    9678 system_pods.go:59] 12 kube-system pods found
	I1107 09:12:34.034794    9678 system_pods.go:61] "coredns-565d847f94-54csh" [6e280b18-683c-4888-93db-3756e665d1f6] Running
	I1107 09:12:34.034798    9678 system_pods.go:61] "etcd-multinode-090641" [b5cec8d5-21cb-4a1e-a05a-92b541499e1c] Running
	I1107 09:12:34.034802    9678 system_pods.go:61] "kindnet-5d6kd" [be85ead0-4248-490e-a8fc-2a92f78801f3] Running
	I1107 09:12:34.034806    9678 system_pods.go:61] "kindnet-mgtrp" [e8094b6c-54ad-4f87-aaf3-88dc5155b128] Running
	I1107 09:12:34.034812    9678 system_pods.go:61] "kindnet-nx5lb" [3021a22e-37f1-40d1-9205-1abfb03e58a9] Running
	I1107 09:12:34.034817    9678 system_pods.go:61] "kube-apiserver-multinode-090641" [3ae5af06-6458-4954-a296-a43002732bf4] Running
	I1107 09:12:34.034821    9678 system_pods.go:61] "kube-controller-manager-multinode-090641" [1c2584e6-6b2e-4c67-aea4-7c5568355345] Running
	I1107 09:12:34.034824    9678 system_pods.go:61] "kube-proxy-hxglr" [64e6c03e-e0da-4b75-a1eb-ff55dd0c84ff] Running
	I1107 09:12:34.034828    9678 system_pods.go:61] "kube-proxy-nwck5" [017b9de2-3593-4e50-9493-7d14c0b994ce] Running
	I1107 09:12:34.034832    9678 system_pods.go:61] "kube-proxy-rqnqb" [f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d] Running
	I1107 09:12:34.034835    9678 system_pods.go:61] "kube-scheduler-multinode-090641" [76a48883-135f-49f5-831d-d0182408b2ca] Running
	I1107 09:12:34.034839    9678 system_pods.go:61] "storage-provisioner" [29595449-7701-47e2-af62-0638177bb673] Running
	I1107 09:12:34.034843    9678 system_pods.go:74] duration metric: took 187.779435ms to wait for pod list to return data ...
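
The kube-system pod listing above can be reproduced with client-go. A rough, self-contained sketch, assuming a kubeconfig at a hypothetical path rather than the profile-embedded client config minikube builds internally:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path for illustration.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			running := p.Status.Phase == corev1.PodRunning
			fmt.Printf("%q running=%v\n", p.Name, running)
		}
	}
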
	I1107 09:12:34.034848    9678 default_sa.go:34] waiting for default service account to be created ...
	I1107 09:12:34.225990    9678 request.go:614] Waited for 191.082557ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/default/serviceaccounts
	I1107 09:12:34.226189    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/default/serviceaccounts
	I1107 09:12:34.226202    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:34.226214    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:34.226224    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:34.230034    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:34.230047    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:34.230055    9678 round_trippers.go:580]     Content-Length: 262
	I1107 09:12:34.230061    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:34 GMT
	I1107 09:12:34.230069    9678 round_trippers.go:580]     Audit-Id: 265e2aee-7e0b-445d-852a-fed21109d4b7
	I1107 09:12:34.230075    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:34.230082    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:34.230094    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:34.230101    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:34.230115    9678 request.go:1154] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1082"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f1ae7bed-fcb2-4b82-ac97-3026f7395742","resourceVersion":"296","creationTimestamp":"2022-11-07T17:07:19Z"}}]}
	I1107 09:12:34.230276    9678 default_sa.go:45] found service account: "default"
	I1107 09:12:34.230291    9678 default_sa.go:55] duration metric: took 195.428822ms for default service account to be created ...
	I1107 09:12:34.230298    9678 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 09:12:34.427445    9678 request.go:614] Waited for 197.078914ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods
	I1107 09:12:34.427588    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods
	I1107 09:12:34.427601    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:34.427615    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:34.427639    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:34.432820    9678 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1107 09:12:34.432848    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:34.432859    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:34.432866    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:34.432872    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:34.432878    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:34 GMT
	I1107 09:12:34.432886    9678 round_trippers.go:580]     Audit-Id: ac13ca40-7114-4a24-920b-13a4b364bfec
	I1107 09:12:34.432893    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:34.434610    9678 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1082"},"items":[{"metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1044","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84957 chars]
	I1107 09:12:34.437114    9678 system_pods.go:86] 12 kube-system pods found
	I1107 09:12:34.437139    9678 system_pods.go:89] "coredns-565d847f94-54csh" [6e280b18-683c-4888-93db-3756e665d1f6] Running
	I1107 09:12:34.437144    9678 system_pods.go:89] "etcd-multinode-090641" [b5cec8d5-21cb-4a1e-a05a-92b541499e1c] Running
	I1107 09:12:34.437148    9678 system_pods.go:89] "kindnet-5d6kd" [be85ead0-4248-490e-a8fc-2a92f78801f3] Running
	I1107 09:12:34.437152    9678 system_pods.go:89] "kindnet-mgtrp" [e8094b6c-54ad-4f87-aaf3-88dc5155b128] Running
	I1107 09:12:34.437156    9678 system_pods.go:89] "kindnet-nx5lb" [3021a22e-37f1-40d1-9205-1abfb03e58a9] Running
	I1107 09:12:34.437159    9678 system_pods.go:89] "kube-apiserver-multinode-090641" [3ae5af06-6458-4954-a296-a43002732bf4] Running
	I1107 09:12:34.437167    9678 system_pods.go:89] "kube-controller-manager-multinode-090641" [1c2584e6-6b2e-4c67-aea4-7c5568355345] Running
	I1107 09:12:34.437171    9678 system_pods.go:89] "kube-proxy-hxglr" [64e6c03e-e0da-4b75-a1eb-ff55dd0c84ff] Running
	I1107 09:12:34.437175    9678 system_pods.go:89] "kube-proxy-nwck5" [017b9de2-3593-4e50-9493-7d14c0b994ce] Running
	I1107 09:12:34.437178    9678 system_pods.go:89] "kube-proxy-rqnqb" [f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d] Running
	I1107 09:12:34.437183    9678 system_pods.go:89] "kube-scheduler-multinode-090641" [76a48883-135f-49f5-831d-d0182408b2ca] Running
	I1107 09:12:34.437187    9678 system_pods.go:89] "storage-provisioner" [29595449-7701-47e2-af62-0638177bb673] Running
	I1107 09:12:34.437192    9678 system_pods.go:126] duration metric: took 206.884328ms to wait for k8s-apps to be running ...
	I1107 09:12:34.437196    9678 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 09:12:34.437261    9678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 09:12:34.446899    9678 system_svc.go:56] duration metric: took 9.697955ms WaitForService to wait for kubelet.
	I1107 09:12:34.446911    9678 kubeadm.go:573] duration metric: took 4.205149164s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 09:12:34.446926    9678 node_conditions.go:102] verifying NodePressure condition ...
	I1107 09:12:34.627437    9678 request.go:614] Waited for 180.434189ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes
	I1107 09:12:34.627554    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes
	I1107 09:12:34.627565    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:34.627576    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:34.627587    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:34.632102    9678 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 09:12:34.632114    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:34.632119    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:34.632124    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:34.632128    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:34 GMT
	I1107 09:12:34.632133    9678 round_trippers.go:580]     Audit-Id: 304c67c3-00ed-47aa-892c-5149b3b193f2
	I1107 09:12:34.632138    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:34.632143    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:34.632225    9678 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1082"},"items":[{"metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFie
lds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 10903 chars]
	I1107 09:12:34.632544    9678 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1107 09:12:34.632554    9678 node_conditions.go:123] node cpu capacity is 6
	I1107 09:12:34.632561    9678 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1107 09:12:34.632564    9678 node_conditions.go:123] node cpu capacity is 6
	I1107 09:12:34.632568    9678 node_conditions.go:105] duration metric: took 185.633793ms to run NodePressure ...
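
The NodePressure step reads each node's reported capacity, which is where the ephemeral-storage and CPU figures above come from. A minimal client-go sketch of the same read (same hypothetical kubeconfig assumption as the previous snippet):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		}
	}
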
	I1107 09:12:34.632575    9678 start.go:217] waiting for startup goroutines ...
	I1107 09:12:34.633275    9678 config.go:180] Loaded profile config "multinode-090641": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:12:34.633342    9678 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/config.json ...
	I1107 09:12:34.655687    9678 out.go:177] * Starting worker node multinode-090641-m02 in cluster multinode-090641
	I1107 09:12:34.677043    9678 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 09:12:34.698340    9678 out.go:177] * Pulling base image ...
	I1107 09:12:34.720473    9678 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 09:12:34.720483    9678 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 09:12:34.720514    9678 cache.go:57] Caching tarball of preloaded images
	I1107 09:12:34.720712    9678 preload.go:174] Found /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 09:12:34.720734    9678 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 09:12:34.721552    9678 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/config.json ...
	I1107 09:12:34.777853    9678 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 09:12:34.777893    9678 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 09:12:34.777906    9678 cache.go:208] Successfully downloaded all kic artifacts
	I1107 09:12:34.777943    9678 start.go:364] acquiring machines lock for multinode-090641-m02: {Name:mk293de5de179041e4a4997c06a64a8e82b6c39e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 09:12:34.778031    9678 start.go:368] acquired machines lock for "multinode-090641-m02" in 75.987µs
	I1107 09:12:34.778057    9678 start.go:96] Skipping create...Using existing machine configuration
	I1107 09:12:34.778062    9678 fix.go:55] fixHost starting: m02
	I1107 09:12:34.778350    9678 cli_runner.go:164] Run: docker container inspect multinode-090641-m02 --format={{.State.Status}}
	I1107 09:12:34.834939    9678 fix.go:103] recreateIfNeeded on multinode-090641-m02: state=Stopped err=<nil>
	W1107 09:12:34.834961    9678 fix.go:129] unexpected machine state, will restart: <nil>
	I1107 09:12:34.856856    9678 out.go:177] * Restarting existing docker container for "multinode-090641-m02" ...
	I1107 09:12:34.899732    9678 cli_runner.go:164] Run: docker start multinode-090641-m02
	I1107 09:12:35.240134    9678 cli_runner.go:164] Run: docker container inspect multinode-090641-m02 --format={{.State.Status}}
	I1107 09:12:35.300533    9678 kic.go:415] container "multinode-090641-m02" state is running.
	I1107 09:12:35.301102    9678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-090641-m02
	I1107 09:12:35.363745    9678 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/config.json ...
	I1107 09:12:35.364248    9678 machine.go:88] provisioning docker machine ...
	I1107 09:12:35.364267    9678 ubuntu.go:169] provisioning hostname "multinode-090641-m02"
	I1107 09:12:35.364375    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:35.434458    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:35.434629    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51457 <nil> <nil>}
	I1107 09:12:35.434639    9678 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-090641-m02 && echo "multinode-090641-m02" | sudo tee /etc/hostname
	I1107 09:12:35.588762    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-090641-m02
	
	I1107 09:12:35.588857    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:35.652316    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:35.652472    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51457 <nil> <nil>}
	I1107 09:12:35.652485    9678 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-090641-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-090641-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-090641-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 09:12:35.773766    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 09:12:35.773786    9678 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15310-2115/.minikube CaCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15310-2115/.minikube}
	I1107 09:12:35.773814    9678 ubuntu.go:177] setting up certificates
	I1107 09:12:35.773823    9678 provision.go:83] configureAuth start
	I1107 09:12:35.773922    9678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-090641-m02
	I1107 09:12:35.833228    9678 provision.go:138] copyHostCerts
	I1107 09:12:35.833279    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 09:12:35.833338    9678 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem, removing ...
	I1107 09:12:35.833344    9678 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 09:12:35.833450    9678 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem (1082 bytes)
	I1107 09:12:35.833623    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 09:12:35.833663    9678 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem, removing ...
	I1107 09:12:35.833667    9678 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 09:12:35.833792    9678 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem (1123 bytes)
	I1107 09:12:35.833933    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 09:12:35.833969    9678 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem, removing ...
	I1107 09:12:35.833974    9678 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 09:12:35.834045    9678 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem (1679 bytes)
	I1107 09:12:35.834172    9678 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem org=jenkins.multinode-090641-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-090641-m02]
	I1107 09:12:36.017539    9678 provision.go:172] copyRemoteCerts
	I1107 09:12:36.017604    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 09:12:36.017669    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:36.079063    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641-m02/id_rsa Username:docker}
	I1107 09:12:36.164012    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 09:12:36.164094    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 09:12:36.181786    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 09:12:36.181882    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1107 09:12:36.198627    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 09:12:36.198718    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 09:12:36.215475    9678 provision.go:86] duration metric: configureAuth took 441.630066ms
	I1107 09:12:36.215505    9678 ubuntu.go:193] setting minikube options for container-runtime
	I1107 09:12:36.215689    9678 config.go:180] Loaded profile config "multinode-090641": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:12:36.215776    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:36.273592    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:36.273767    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51457 <nil> <nil>}
	I1107 09:12:36.273778    9678 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 09:12:36.389713    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 09:12:36.389731    9678 ubuntu.go:71] root file system type: overlay
	I1107 09:12:36.389892    9678 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 09:12:36.389978    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:36.448754    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:36.448913    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51457 <nil> <nil>}
	I1107 09:12:36.448961    9678 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 09:12:36.574590    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 09:12:36.574715    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:36.632733    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:36.632880    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51457 <nil> <nil>}
	I1107 09:12:36.632895    9678 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 09:12:36.754910    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 09:12:36.754927    9678 machine.go:91] provisioned docker machine in 1.390634499s
	I1107 09:12:36.754934    9678 start.go:300] post-start starting for "multinode-090641-m02" (driver="docker")
	I1107 09:12:36.754940    9678 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 09:12:36.755014    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 09:12:36.755077    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:36.812191    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641-m02/id_rsa Username:docker}
	I1107 09:12:36.898167    9678 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 09:12:36.901449    9678 command_runner.go:130] > NAME="Ubuntu"
	I1107 09:12:36.901460    9678 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I1107 09:12:36.901464    9678 command_runner.go:130] > ID=ubuntu
	I1107 09:12:36.901470    9678 command_runner.go:130] > ID_LIKE=debian
	I1107 09:12:36.901477    9678 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I1107 09:12:36.901482    9678 command_runner.go:130] > VERSION_ID="20.04"
	I1107 09:12:36.901489    9678 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1107 09:12:36.901497    9678 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1107 09:12:36.901502    9678 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1107 09:12:36.901518    9678 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1107 09:12:36.901523    9678 command_runner.go:130] > VERSION_CODENAME=focal
	I1107 09:12:36.901529    9678 command_runner.go:130] > UBUNTU_CODENAME=focal
	I1107 09:12:36.901579    9678 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 09:12:36.901589    9678 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 09:12:36.901599    9678 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 09:12:36.901604    9678 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 09:12:36.901610    9678 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/addons for local assets ...
	I1107 09:12:36.901698    9678 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/files for local assets ...
	I1107 09:12:36.901860    9678 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> 32672.pem in /etc/ssl/certs
	I1107 09:12:36.901868    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> /etc/ssl/certs/32672.pem
	I1107 09:12:36.902053    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 09:12:36.910520    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:12:36.928012    9678 start.go:303] post-start completed in 173.063441ms
	I1107 09:12:36.928092    9678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 09:12:36.928153    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:36.986235    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641-m02/id_rsa Username:docker}
	I1107 09:12:37.069251    9678 command_runner.go:130] > 6%
	I1107 09:12:37.069341    9678 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 09:12:37.073871    9678 command_runner.go:130] > 92G
	I1107 09:12:37.094141    9678 fix.go:57] fixHost completed within 2.316013742s
	I1107 09:12:37.094163    9678 start.go:83] releasing machines lock for "multinode-090641-m02", held for 2.316061544s
	I1107 09:12:37.094352    9678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-090641-m02
	I1107 09:12:37.174178    9678 out.go:177] * Found network options:
	I1107 09:12:37.195455    9678 out.go:177]   - NO_PROXY=192.168.58.2
	W1107 09:12:37.217090    9678 proxy.go:119] fail to check proxy env: Error ip not in block
	W1107 09:12:37.217155    9678 proxy.go:119] fail to check proxy env: Error ip not in block
	I1107 09:12:37.217384    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1107 09:12:37.217393    9678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 09:12:37.217520    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:37.217541    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:37.279387    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641-m02/id_rsa Username:docker}
	I1107 09:12:37.279533    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641-m02/id_rsa Username:docker}
	I1107 09:12:37.418029    9678 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1107 09:12:37.419951    9678 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I1107 09:12:37.432516    9678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:12:37.506573    9678 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1107 09:12:37.605540    9678 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 09:12:37.616361    9678 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1107 09:12:37.616393    9678 command_runner.go:130] > [Unit]
	I1107 09:12:37.616403    9678 command_runner.go:130] > Description=Docker Application Container Engine
	I1107 09:12:37.616409    9678 command_runner.go:130] > Documentation=https://docs.docker.com
	I1107 09:12:37.616413    9678 command_runner.go:130] > BindsTo=containerd.service
	I1107 09:12:37.616418    9678 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1107 09:12:37.616423    9678 command_runner.go:130] > Wants=network-online.target
	I1107 09:12:37.616427    9678 command_runner.go:130] > Requires=docker.socket
	I1107 09:12:37.616432    9678 command_runner.go:130] > StartLimitBurst=3
	I1107 09:12:37.616436    9678 command_runner.go:130] > StartLimitIntervalSec=60
	I1107 09:12:37.616440    9678 command_runner.go:130] > [Service]
	I1107 09:12:37.616443    9678 command_runner.go:130] > Type=notify
	I1107 09:12:37.616447    9678 command_runner.go:130] > Restart=on-failure
	I1107 09:12:37.616451    9678 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I1107 09:12:37.616456    9678 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1107 09:12:37.616463    9678 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1107 09:12:37.616468    9678 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1107 09:12:37.616474    9678 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1107 09:12:37.616481    9678 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1107 09:12:37.616486    9678 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1107 09:12:37.616496    9678 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1107 09:12:37.616506    9678 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1107 09:12:37.616515    9678 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1107 09:12:37.616521    9678 command_runner.go:130] > ExecStart=
	I1107 09:12:37.616532    9678 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1107 09:12:37.616536    9678 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1107 09:12:37.616544    9678 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1107 09:12:37.616549    9678 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1107 09:12:37.616553    9678 command_runner.go:130] > LimitNOFILE=infinity
	I1107 09:12:37.616556    9678 command_runner.go:130] > LimitNPROC=infinity
	I1107 09:12:37.616560    9678 command_runner.go:130] > LimitCORE=infinity
	I1107 09:12:37.616565    9678 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1107 09:12:37.616569    9678 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1107 09:12:37.616576    9678 command_runner.go:130] > TasksMax=infinity
	I1107 09:12:37.616579    9678 command_runner.go:130] > TimeoutStartSec=0
	I1107 09:12:37.616584    9678 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1107 09:12:37.616588    9678 command_runner.go:130] > Delegate=yes
	I1107 09:12:37.616598    9678 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1107 09:12:37.616602    9678 command_runner.go:130] > KillMode=process
	I1107 09:12:37.616605    9678 command_runner.go:130] > [Install]
	I1107 09:12:37.616609    9678 command_runner.go:130] > WantedBy=multi-user.target
	I1107 09:12:37.617203    9678 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 09:12:37.617270    9678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 09:12:37.626466    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 09:12:37.638117    9678 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1107 09:12:37.638129    9678 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I1107 09:12:37.639036    9678 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 09:12:37.712086    9678 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 09:12:37.782059    9678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:12:37.850049    9678 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 09:12:38.082578    9678 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 09:12:38.147577    9678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:12:38.216906    9678 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1107 09:12:38.226468    9678 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 09:12:38.226555    9678 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 09:12:38.230230    9678 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1107 09:12:38.230240    9678 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1107 09:12:38.230249    9678 command_runner.go:130] > Device: 100036h/1048630d	Inode: 130         Links: 1
	I1107 09:12:38.230255    9678 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1107 09:12:38.230262    9678 command_runner.go:130] > Access: 2022-11-07 17:12:38.061131969 +0000
	I1107 09:12:38.230266    9678 command_runner.go:130] > Modify: 2022-11-07 17:12:37.527132006 +0000
	I1107 09:12:38.230271    9678 command_runner.go:130] > Change: 2022-11-07 17:12:37.542132005 +0000
	I1107 09:12:38.230275    9678 command_runner.go:130] >  Birth: -
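
Waiting for the cri-dockerd socket amounts to polling a stat on the path with a deadline, which is what the 60s wait above does (the real check runs stat over SSH inside the node container). A small local sketch of that pattern:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("socket is ready")
	}
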
	I1107 09:12:38.230286    9678 start.go:472] Will wait 60s for crictl version
	I1107 09:12:38.230335    9678 ssh_runner.go:195] Run: sudo crictl version
	I1107 09:12:38.256676    9678 command_runner.go:130] > Version:  0.1.0
	I1107 09:12:38.256687    9678 command_runner.go:130] > RuntimeName:  docker
	I1107 09:12:38.256697    9678 command_runner.go:130] > RuntimeVersion:  20.10.20
	I1107 09:12:38.256710    9678 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I1107 09:12:38.258513    9678 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1107 09:12:38.258605    9678 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:12:38.284676    9678 command_runner.go:130] > 20.10.20
	I1107 09:12:38.286659    9678 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:12:38.312548    9678 command_runner.go:130] > 20.10.20
	I1107 09:12:38.359207    9678 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1107 09:12:38.381005    9678 out.go:177]   - env NO_PROXY=192.168.58.2
	I1107 09:12:38.402202    9678 cli_runner.go:164] Run: docker exec -t multinode-090641-m02 dig +short host.docker.internal
	I1107 09:12:38.521784    9678 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1107 09:12:38.521888    9678 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1107 09:12:38.526217    9678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 09:12:38.535736    9678 certs.go:54] Setting up /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641 for IP: 192.168.58.3
	I1107 09:12:38.535853    9678 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key
	I1107 09:12:38.535907    9678 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key
	I1107 09:12:38.535915    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 09:12:38.535948    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 09:12:38.535973    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 09:12:38.535998    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 09:12:38.536086    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem (1338 bytes)
	W1107 09:12:38.536138    9678 certs.go:384] ignoring /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267_empty.pem, impossibly tiny 0 bytes
	I1107 09:12:38.536153    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 09:12:38.536193    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem (1082 bytes)
	I1107 09:12:38.536232    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem (1123 bytes)
	I1107 09:12:38.536264    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem (1679 bytes)
	I1107 09:12:38.536342    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:12:38.536383    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> /usr/share/ca-certificates/32672.pem
	I1107 09:12:38.536413    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:38.536434    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem -> /usr/share/ca-certificates/3267.pem
	I1107 09:12:38.536767    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 09:12:38.554401    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 09:12:38.571586    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 09:12:38.589897    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 09:12:38.607380    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /usr/share/ca-certificates/32672.pem (1708 bytes)
	I1107 09:12:38.623922    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 09:12:38.641539    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem --> /usr/share/ca-certificates/3267.pem (1338 bytes)
	I1107 09:12:38.659050    9678 ssh_runner.go:195] Run: openssl version
	I1107 09:12:38.664357    9678 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I1107 09:12:38.664777    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 09:12:38.672962    9678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:38.676851    9678 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:38.676940    9678 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:38.677002    9678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:38.682120    9678 command_runner.go:130] > b5213941
	I1107 09:12:38.682452    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 09:12:38.690013    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3267.pem && ln -fs /usr/share/ca-certificates/3267.pem /etc/ssl/certs/3267.pem"
	I1107 09:12:38.698443    9678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3267.pem
	I1107 09:12:38.702204    9678 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/3267.pem
	I1107 09:12:38.702334    9678 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/3267.pem
	I1107 09:12:38.702384    9678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3267.pem
	I1107 09:12:38.707394    9678 command_runner.go:130] > 51391683
	I1107 09:12:38.707749    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3267.pem /etc/ssl/certs/51391683.0"
	I1107 09:12:38.715087    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32672.pem && ln -fs /usr/share/ca-certificates/32672.pem /etc/ssl/certs/32672.pem"
	I1107 09:12:38.722738    9678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32672.pem
	I1107 09:12:38.726303    9678 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/32672.pem
	I1107 09:12:38.726395    9678 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/32672.pem
	I1107 09:12:38.726443    9678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32672.pem
	I1107 09:12:38.731207    9678 command_runner.go:130] > 3ec20f2e
	I1107 09:12:38.731456    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32672.pem /etc/ssl/certs/3ec20f2e.0"
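
The certificate installation above follows the standard OpenSSL hashed-directory convention: compute the subject hash of each CA file and symlink it as <hash>.0 under /etc/ssl/certs. A local sketch of that step, assuming openssl is on PATH (minikube runs the equivalent commands over SSH in the node):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert computes the OpenSSL subject hash of certPath and symlinks it
	// into certsDir as <hash>.0, the layout OpenSSL uses to look up CAs.
	func linkCert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}
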
	I1107 09:12:38.738803    9678 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 09:12:38.805762    9678 command_runner.go:130] > systemd
	I1107 09:12:38.808063    9678 cni.go:95] Creating CNI manager for ""
	I1107 09:12:38.808074    9678 cni.go:156] 2 nodes found, recommending kindnet
	I1107 09:12:38.808085    9678 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 09:12:38.808105    9678 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-090641 NodeName:multinode-090641-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 09:12:38.808187    9678 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-090641-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 09:12:38.808258    9678 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-090641-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-090641 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
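
The kubeadm config above is rendered from the option struct logged before it; the values that differ per node are the advertise address, node name, and node-ip. A toy text/template sketch of that substitution, covering only the nodeRegistration block, with struct and field names that are illustrative rather than the actual ones in kubeadm.go:

	package main

	import (
		"os"
		"text/template"
	)

	// nodeOpts is a trimmed-down, illustrative stand-in for minikube's kubeadm options.
	type nodeOpts struct {
		NodeName string
		NodeIP   string
	}

	const nodeRegistrationTmpl = `nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	  taints: []
	`

	func main() {
		t := template.Must(template.New("nodeRegistration").Parse(nodeRegistrationTmpl))
		// Values taken from the log for the m02 worker node.
		_ = t.Execute(os.Stdout, nodeOpts{NodeName: "multinode-090641-m02", NodeIP: "192.168.58.3"})
	}
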
	I1107 09:12:38.808332    9678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1107 09:12:38.815160    9678 command_runner.go:130] > kubeadm
	I1107 09:12:38.815168    9678 command_runner.go:130] > kubectl
	I1107 09:12:38.815171    9678 command_runner.go:130] > kubelet
	I1107 09:12:38.815923    9678 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 09:12:38.815985    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1107 09:12:38.822735    9678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (482 bytes)
	I1107 09:12:38.835647    9678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 09:12:38.848281    9678 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1107 09:12:38.851948    9678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 09:12:38.861329    9678 host.go:66] Checking if "multinode-090641" exists ...
	I1107 09:12:38.861511    9678 config.go:180] Loaded profile config "multinode-090641": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:12:38.861518    9678 start.go:286] JoinCluster: &{Name:multinode-090641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-090641 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:12:38.861608    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1107 09:12:38.861673    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:38.919647    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:39.053317    9678 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f 
	I1107 09:12:39.053349    9678 start.go:299] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1107 09:12:39.053369    9678 host.go:66] Checking if "multinode-090641" exists ...
	I1107 09:12:39.053609    9678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-090641-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1107 09:12:39.053674    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:39.111894    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:39.237803    9678 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1107 09:12:39.268588    9678 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-5d6kd, kube-system/kube-proxy-hxglr
	I1107 09:12:42.278699    9678 command_runner.go:130] > node/multinode-090641-m02 cordoned
	I1107 09:12:42.278716    9678 command_runner.go:130] > pod "busybox-65db55d5d6-gvc9j" has DeletionTimestamp older than 1 seconds, skipping
	I1107 09:12:42.278721    9678 command_runner.go:130] > node/multinode-090641-m02 drained
	I1107 09:12:42.278740    9678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-090641-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.225031924s)
	I1107 09:12:42.278750    9678 node.go:109] successfully drained node "m02"
	I1107 09:12:42.279107    9678 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:12:42.279321    9678 kapi.go:59] client config for multinode-090641: &rest.Config{Host:"https://127.0.0.1:51429", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.key", CAFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2345ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 09:12:42.279569    9678 request.go:1154] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1107 09:12:42.279600    9678 round_trippers.go:463] DELETE https://127.0.0.1:51429/api/v1/nodes/multinode-090641-m02
	I1107 09:12:42.279605    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:42.279611    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:42.279617    9678 round_trippers.go:473]     Content-Type: application/json
	I1107 09:12:42.279622    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:42.283069    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:42.283080    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:42.283086    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:42.283091    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:42.283099    9678 round_trippers.go:580]     Content-Length: 171
	I1107 09:12:42.283104    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:42 GMT
	I1107 09:12:42.283112    9678 round_trippers.go:580]     Audit-Id: 4b919fdd-8370-45bb-8e46-98d354ba8e74
	I1107 09:12:42.283117    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:42.283122    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:42.283136    9678 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-090641-m02","kind":"nodes","uid":"1d15109e-f0b8-4c7f-a0d6-c4e58cde1a91"}}
	I1107 09:12:42.283163    9678 node.go:125] successfully deleted node "m02"
	I1107 09:12:42.283170    9678 start.go:303] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1107 09:12:42.283182    9678 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1107 09:12:42.283192    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02"
	I1107 09:12:42.321330    9678 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 09:12:42.432078    9678 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 09:12:42.432093    9678 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1107 09:12:42.452544    9678 command_runner.go:130] ! W1107 17:12:42.332044    1095 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:12:42.452557    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1107 09:12:42.452570    9678 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1107 09:12:42.452578    9678 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1107 09:12:42.452583    9678 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1107 09:12:42.452590    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1107 09:12:42.452605    9678 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1107 09:12:42.452611    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1107 09:12:42.452650    9678 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:12:42.332044    1095 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:12:42.452663    9678 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1107 09:12:42.452671    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1107 09:12:42.489820    9678 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1107 09:12:42.489840    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1107 09:12:42.489860    9678 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:12:42.489880    9678 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:12:42.332044    1095 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:12:53.537148    9678 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1107 09:12:53.537312    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02"
	I1107 09:12:53.576193    9678 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 09:12:53.675405    9678 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 09:12:53.675423    9678 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1107 09:12:53.693584    9678 command_runner.go:130] ! W1107 17:12:53.575913    1771 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:12:53.693605    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1107 09:12:53.693614    9678 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1107 09:12:53.693620    9678 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1107 09:12:53.693627    9678 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1107 09:12:53.693633    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1107 09:12:53.693642    9678 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1107 09:12:53.693647    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1107 09:12:53.693676    9678 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:12:53.575913    1771 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:12:53.693685    9678 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1107 09:12:53.693692    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1107 09:12:53.731248    9678 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1107 09:12:53.731269    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1107 09:12:53.731291    9678 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:12:53.731302    9678 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:12:53.575913    1771 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:15.341284    9678 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1107 09:13:15.341342    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02"
	I1107 09:13:15.378198    9678 command_runner.go:130] ! W1107 17:13:15.393134    2010 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:13:15.378214    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1107 09:13:15.400742    9678 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1107 09:13:15.405187    9678 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1107 09:13:15.465066    9678 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1107 09:13:15.465080    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1107 09:13:15.491421    9678 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1107 09:13:15.491444    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:15.494621    9678 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 09:13:15.494636    9678 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 09:13:15.494643    9678 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E1107 09:13:15.494673    9678 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:13:15.393134    2010 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:15.494682    9678 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1107 09:13:15.494691    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1107 09:13:15.530806    9678 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1107 09:13:15.530821    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:15.530835    9678 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:15.530846    9678 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:13:15.393134    2010 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:41.734157    9678 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1107 09:13:41.734233    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02"
	I1107 09:13:41.767927    9678 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 09:13:41.869550    9678 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 09:13:41.869564    9678 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1107 09:13:41.888428    9678 command_runner.go:130] ! W1107 17:13:41.780315    2274 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:13:41.888441    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1107 09:13:41.888452    9678 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1107 09:13:41.888457    9678 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1107 09:13:41.888462    9678 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1107 09:13:41.888469    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1107 09:13:41.888478    9678 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1107 09:13:41.888484    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1107 09:13:41.888510    9678 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:13:41.780315    2274 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:41.888518    9678 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1107 09:13:41.888526    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1107 09:13:41.923298    9678 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1107 09:13:41.923314    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:41.926330    9678 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:41.926346    9678 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:13:41.780315    2274 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:14:13.577189    9678 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1107 09:14:13.577283    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02"
	I1107 09:14:13.614416    9678 command_runner.go:130] ! W1107 17:14:13.628078    2604 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:14:13.614527    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1107 09:14:13.637583    9678 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1107 09:14:13.642787    9678 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1107 09:14:13.706851    9678 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1107 09:14:13.706864    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1107 09:14:13.732790    9678 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1107 09:14:13.732803    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1107 09:14:13.735842    9678 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 09:14:13.735853    9678 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 09:14:13.735860    9678 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E1107 09:14:13.735888    9678 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:14:13.628078    2604 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:14:13.735906    9678 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1107 09:14:13.735922    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1107 09:14:13.770217    9678 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1107 09:14:13.770230    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1107 09:14:13.772704    9678 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:14:13.772723    9678 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:14:13.628078    2604 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:15:00.585276    9678 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1107 09:15:00.585339    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02"
	I1107 09:15:00.624218    9678 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 09:15:00.727643    9678 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 09:15:00.727680    9678 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1107 09:15:00.744682    9678 command_runner.go:130] ! W1107 17:15:00.628744    3040 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:15:00.744697    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1107 09:15:00.744708    9678 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1107 09:15:00.744713    9678 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1107 09:15:00.744718    9678 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1107 09:15:00.744737    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1107 09:15:00.744750    9678 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1107 09:15:00.744756    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1107 09:15:00.744793    9678 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:15:00.628744    3040 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:15:00.744801    9678 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1107 09:15:00.744810    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1107 09:15:00.786274    9678 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1107 09:15:00.786289    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1107 09:15:00.786308    9678 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:15:00.786324    9678 start.go:288] JoinCluster complete in 2m21.921226011s
	I1107 09:15:00.808159    9678 out.go:177] 
	W1107 09:15:00.829126    9678 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:15:00.628744    3040 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:15:00.628744    3040 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 09:15:00.829157    9678 out.go:239] * 
	* 
	W1107 09:15:00.829806    9678 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 09:15:00.892201    9678 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:354: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-090641 --wait=true -v=8 --alsologtostderr --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-090641
helpers_test.go:235: (dbg) docker inspect multinode-090641:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6347e05c8c132ccbebdee256949fdef5c1ff700fb76625a55438eb4d54c50392",
	        "Created": "2022-11-07T17:06:47.741264794Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 102794,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T17:12:03.291058744Z",
	            "FinishedAt": "2022-11-07T17:11:49.236728018Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/6347e05c8c132ccbebdee256949fdef5c1ff700fb76625a55438eb4d54c50392/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6347e05c8c132ccbebdee256949fdef5c1ff700fb76625a55438eb4d54c50392/hostname",
	        "HostsPath": "/var/lib/docker/containers/6347e05c8c132ccbebdee256949fdef5c1ff700fb76625a55438eb4d54c50392/hosts",
	        "LogPath": "/var/lib/docker/containers/6347e05c8c132ccbebdee256949fdef5c1ff700fb76625a55438eb4d54c50392/6347e05c8c132ccbebdee256949fdef5c1ff700fb76625a55438eb4d54c50392-json.log",
	        "Name": "/multinode-090641",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-090641:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-090641",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4e479b9ef9eb27811817465171166e61055f23ffffc5a57868724955bb5d9e0e-init/diff:/var/lib/docker/overlay2/8ef76795356079208b1acef7376be67a28d951b743a50dd56a60b0d456568ae9/diff:/var/lib/docker/overlay2/f9288d2baad2a30057af35c115d2ebfb4650d5d1d798a60a2334facced392980/diff:/var/lib/docker/overlay2/270f6ca71b47e51691c54d669e6e8e86c321939c053498289406eab5aa0462f5/diff:/var/lib/docker/overlay2/ebe3fe002872a87a7cc54a77192a2ea1f0efb3730f887abec35652e72f152f46/diff:/var/lib/docker/overlay2/83c9d5ae9817ab2b318ad7ba44ade4fe9c22378e15e338b8fe94c5998fbac5c4/diff:/var/lib/docker/overlay2/6426b1d4e4f369bec5066b3c17c47f9c451787be596ba417de62155901d14061/diff:/var/lib/docker/overlay2/f409955dc1056669a5ee00fa64ecfa9733f3de1a92beefeeca73cba51d930189/diff:/var/lib/docker/overlay2/3ecb7ca97b99ba70c03450a3d6d4a4452c7e9e348eec3cf89e6e8ee51aba6a8b/diff:/var/lib/docker/overlay2/9dd8fffded9665b1b7a326cb2bb3e29e3b716cdba6544940490326ddcbfe2bda/diff:/var/lib/docker/overlay2/b43aed
d977d94230f77efb53c193c1a02895ea314fcdece500155052dfeb6b29/diff:/var/lib/docker/overlay2/ba3bd8f651e3503bd8eadf3ce01b8930edaf7eb6af4044593c756be0f3c5d03a/diff:/var/lib/docker/overlay2/359c64a8e323929352da8612c231ccf0f6be76af37c8a208a9ee98c3bce5e2a1/diff:/var/lib/docker/overlay2/868ec2aea7bce1a74dcdf6c7a708b34838e8c08e795aad6e5b974d1ab15b719c/diff:/var/lib/docker/overlay2/0438a0192165f11b19940586b456c07bfa31d015147b9d008aafaacc09fbc40c/diff:/var/lib/docker/overlay2/80a13b6491a8f9f1c0f6848a375575c20f50d592cb34f21491050776a56fca61/diff:/var/lib/docker/overlay2/dd29a4d45bcf60d3684330374a82b3f3bde4245c5d49661ffdd516cd0c0af260/diff:/var/lib/docker/overlay2/ef8c6936e45d238f2880da0d94945cb610fba8a9e38cdfb3ae6674a82a8f0480/diff:/var/lib/docker/overlay2/9934f45b2cecf953b6f56ee634f63c3dd99c8c358b74fee64fdc62cef64f7723/diff:/var/lib/docker/overlay2/f5ccdcf1811b84ddfcc2efdc07e5feefa2803c1fe476b6653b0a6af55c2e684f/diff:/var/lib/docker/overlay2/2b3b062a0d083aedf009b6c8dde21debe0396b301936ec1950364a1d0ef86b6d/diff:/var/lib/d
ocker/overlay2/db91c57bd6754e3dbdc6c234df413d494606d408e284454bf7ab30cd23f9e840/diff:/var/lib/docker/overlay2/6538f86ce38383e3a133480b44c25afa8b31a61935d6f87270e2cc139e424425/diff:/var/lib/docker/overlay2/80972648e2aa65675fe7f3de22feae57951c0092d5f963f2430650b071940bba/diff:/var/lib/docker/overlay2/19dc0f28f2a85362d2b586f65ab00efa8a97868656af9dc5911259dd3ca649ac/diff:/var/lib/docker/overlay2/99eff050eadab512f36f80d63e8b57d9aa45ef607d723d7ac3f20ece8310a758/diff:/var/lib/docker/overlay2/d6309ab08fa5212992e2b5125645ad32bce2940b50c5e8a5b72e7c7531eb80b4/diff:/var/lib/docker/overlay2/c4d3d6d4212753e50a5f68577281382a30773fb33ca98730aebdfd86d48f612c/diff:/var/lib/docker/overlay2/4292068e16912b59305479ae020d9aa923d57157c4a28dd11e69102be9c1541a/diff:/var/lib/docker/overlay2/2274c567eadc1a99c8173258b3794df0df44fd1abac0aaae2100133ad15b3f30/diff:/var/lib/docker/overlay2/e3bb447cc7563c5af39c4076a93bb7b33bd1a7c6c5ccef7fea2a6a99deddf9f3/diff:/var/lib/docker/overlay2/4329b8a4d7648d8e3bb46a144b9939a5026fa69e5ac188a778cf6ede21a
9627e/diff:/var/lib/docker/overlay2/b600639ff99f881a9eb993fd36e2faf1c0f88a869675ab9d8ec116efc2642784/diff:/var/lib/docker/overlay2/da083fbec4f2fa2681bbaaaa559fdcc46ec2a520e7b9ced39197e805a661fda3/diff:/var/lib/docker/overlay2/63848d00284d16d750a7e746c8be62f8c15819bc2fcb72297788f3c9647257e6/diff:/var/lib/docker/overlay2/3fd667008c6a5c1c5828bb4e003fc21c477a31c4d59b5b675a3886d8a7cb782d/diff:/var/lib/docker/overlay2/6b125cd950aed912fcc597ce8a96bbb5af3dbba111d6eb683ea981387e02e99d/diff:/var/lib/docker/overlay2/b4c672faa14a55ba585c6063024785d7913afc546dd6d04975591d2e13d7b52f/diff:/var/lib/docker/overlay2/c2c0287a05145a26d3313d4e33799ea96103a20115734a66a3c2af8fe728b170/diff:/var/lib/docker/overlay2/dba7b9788bd657997c8cee3b3ef21f9bc4ade7b5a0da25526255047311da571d/diff:/var/lib/docker/overlay2/1f3ae87b3ce804fde9f857de6cb225d5afa00aa39260d197d77f67e840e2d285/diff:/var/lib/docker/overlay2/603b72832425bade21ef2d76583dbe61a46ff7fbe7277673cbc6cd52cf7613dd/diff:/var/lib/docker/overlay2/a47793b1e0564c094c05134af06d2d46a6bcb7
6089b3836b831863ef51c21684/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e479b9ef9eb27811817465171166e61055f23ffffc5a57868724955bb5d9e0e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e479b9ef9eb27811817465171166e61055f23ffffc5a57868724955bb5d9e0e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e479b9ef9eb27811817465171166e61055f23ffffc5a57868724955bb5d9e0e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-090641",
	                "Source": "/var/lib/docker/volumes/multinode-090641/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-090641",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-090641",
	                "name.minikube.sigs.k8s.io": "multinode-090641",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "04fe19ea2aa9cd6acd793a13abf255f64a5d918e3931d817ebf8523397dc6b5a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51425"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51426"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51428"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51429"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/04fe19ea2aa9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-090641": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6347e05c8c13",
	                        "multinode-090641"
	                    ],
	                    "NetworkID": "0b670b6c25cba3e52a3af8f55d3e7a9a9f85bb0e58cfcca9b73d6190d5b2420b",
	                    "EndpointID": "cce09dbcfcaaeded9b8af1e447b5152aecbe02ffdf75191616090a2a5c5105c5",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
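For context, the inspect output above shows the control-plane container itself recovered: State.Status is "running" with StartedAt 17:12:03 UTC, attached to the "multinode-090641" network at 192.168.58.2, so the exit status 80 comes from the m02 worker join rather than from this container. A hypothetical one-liner (not part of the harness) to pull just those two fields with a Go template would be:

	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "multinode-090641").IPAddress}}' multinode-090641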
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-090641 -n multinode-090641
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-090641 logs -n 25: (3.083991798s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                             |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-090641 cp multinode-090641-m02:/home/docker/cp-test.txt                                                           | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:08 PST |
	|         | multinode-090641:/home/docker/cp-test_multinode-090641-m02_multinode-090641.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-090641 ssh -n                                                                                                     | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:08 PST |
	|         | multinode-090641-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-090641 ssh -n multinode-090641 sudo cat                                                                           | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:08 PST |
	|         | /home/docker/cp-test_multinode-090641-m02_multinode-090641.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-090641 cp multinode-090641-m02:/home/docker/cp-test.txt                                                           | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:08 PST |
	|         | multinode-090641-m03:/home/docker/cp-test_multinode-090641-m02_multinode-090641-m03.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-090641 ssh -n                                                                                                     | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:08 PST |
	|         | multinode-090641-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-090641 ssh -n multinode-090641-m03 sudo cat                                                                       | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:08 PST |
	|         | /home/docker/cp-test_multinode-090641-m02_multinode-090641-m03.txt                                                          |                  |         |         |                     |                     |
	| cp      | multinode-090641 cp testdata/cp-test.txt                                                                                    | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:08 PST |
	|         | multinode-090641-m03:/home/docker/cp-test.txt                                                                               |                  |         |         |                     |                     |
	| ssh     | multinode-090641 ssh -n                                                                                                     | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:08 PST |
	|         | multinode-090641-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-090641 cp multinode-090641-m03:/home/docker/cp-test.txt                                                           | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:08 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile2112461042/001/cp-test_multinode-090641-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-090641 ssh -n                                                                                                     | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:08 PST |
	|         | multinode-090641-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-090641 cp multinode-090641-m03:/home/docker/cp-test.txt                                                           | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:08 PST |
	|         | multinode-090641:/home/docker/cp-test_multinode-090641-m03_multinode-090641.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-090641 ssh -n                                                                                                     | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:08 PST |
	|         | multinode-090641-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-090641 ssh -n multinode-090641 sudo cat                                                                           | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:08 PST |
	|         | /home/docker/cp-test_multinode-090641-m03_multinode-090641.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-090641 cp multinode-090641-m03:/home/docker/cp-test.txt                                                           | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:08 PST |
	|         | multinode-090641-m02:/home/docker/cp-test_multinode-090641-m03_multinode-090641-m02.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-090641 ssh -n                                                                                                     | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:08 PST |
	|         | multinode-090641-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-090641 ssh -n multinode-090641-m02 sudo cat                                                                       | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:08 PST |
	|         | /home/docker/cp-test_multinode-090641-m03_multinode-090641-m02.txt                                                          |                  |         |         |                     |                     |
	| node    | multinode-090641 node stop m03                                                                                              | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:08 PST | 07 Nov 22 09:09 PST |
	| node    | multinode-090641 node start                                                                                                 | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:09 PST | 07 Nov 22 09:09 PST |
	|         | m03 --alsologtostderr                                                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-090641                                                                                                    | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:09 PST |                     |
	| stop    | -p multinode-090641                                                                                                         | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:09 PST | 07 Nov 22 09:10 PST |
	| start   | -p multinode-090641                                                                                                         | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:10 PST | 07 Nov 22 09:11 PST |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	| node    | list -p multinode-090641                                                                                                    | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:11 PST |                     |
	| node    | multinode-090641 node delete                                                                                                | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:11 PST | 07 Nov 22 09:11 PST |
	|         | m03                                                                                                                         |                  |         |         |                     |                     |
	| stop    | multinode-090641 stop                                                                                                       | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:11 PST | 07 Nov 22 09:12 PST |
	| start   | -p multinode-090641                                                                                                         | multinode-090641 | jenkins | v1.28.0 | 07 Nov 22 09:12 PST |                     |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	|         | --driver=docker                                                                                                             |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 09:12:02
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 09:12:02.068696    9678 out.go:296] Setting OutFile to fd 1 ...
	I1107 09:12:02.068876    9678 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:12:02.068882    9678 out.go:309] Setting ErrFile to fd 2...
	I1107 09:12:02.068886    9678 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:12:02.068997    9678 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	I1107 09:12:02.069491    9678 out.go:303] Setting JSON to false
	I1107 09:12:02.088055    9678 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":2497,"bootTime":1667838625,"procs":396,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1107 09:12:02.088163    9678 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 09:12:02.110109    9678 out.go:177] * [multinode-090641] minikube v1.28.0 on Darwin 13.0
	I1107 09:12:02.152978    9678 notify.go:220] Checking for updates...
	I1107 09:12:02.174669    9678 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 09:12:02.195649    9678 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:12:02.217060    9678 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 09:12:02.238693    9678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 09:12:02.259951    9678 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	I1107 09:12:02.282515    9678 config.go:180] Loaded profile config "multinode-090641": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:12:02.283059    9678 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 09:12:02.343915    9678 docker.go:137] docker version: linux-20.10.20
	I1107 09:12:02.344059    9678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 09:12:02.486010    9678 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-07 17:12:02.400066327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 09:12:02.529501    9678 out.go:177] * Using the docker driver based on existing profile
	I1107 09:12:02.550766    9678 start.go:282] selected driver: docker
	I1107 09:12:02.550802    9678 start.go:808] validating driver "docker" against &{Name:multinode-090641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-090641 Namespace:default APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securit
y-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:12:02.551015    9678 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 09:12:02.551277    9678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 09:12:02.692763    9678 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-07 17:12:02.608782854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 09:12:02.695167    9678 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 09:12:02.695194    9678 cni.go:95] Creating CNI manager for ""
	I1107 09:12:02.695203    9678 cni.go:156] 2 nodes found, recommending kindnet
	I1107 09:12:02.695218    9678 start_flags.go:317] config:
	{Name:multinode-090641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-090641 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkP
lugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-cr
eds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:12:02.738804    9678 out.go:177] * Starting control plane node multinode-090641 in cluster multinode-090641
	I1107 09:12:02.760104    9678 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 09:12:02.782012    9678 out.go:177] * Pulling base image ...
	I1107 09:12:02.803932    9678 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 09:12:02.803967    9678 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 09:12:02.804029    9678 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 09:12:02.804048    9678 cache.go:57] Caching tarball of preloaded images
	I1107 09:12:02.804279    9678 preload.go:174] Found /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 09:12:02.804301    9678 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 09:12:02.805280    9678 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/config.json ...
	I1107 09:12:02.859827    9678 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 09:12:02.859842    9678 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 09:12:02.859851    9678 cache.go:208] Successfully downloaded all kic artifacts
	I1107 09:12:02.859890    9678 start.go:364] acquiring machines lock for multinode-090641: {Name:mk3bc128ea070c03d4d369f5843a2d85d99f9678 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 09:12:02.859978    9678 start.go:368] acquired machines lock for "multinode-090641" in 68.063µs
	I1107 09:12:02.860001    9678 start.go:96] Skipping create...Using existing machine configuration
	I1107 09:12:02.860013    9678 fix.go:55] fixHost starting: 
	I1107 09:12:02.860255    9678 cli_runner.go:164] Run: docker container inspect multinode-090641 --format={{.State.Status}}
	I1107 09:12:02.915360    9678 fix.go:103] recreateIfNeeded on multinode-090641: state=Stopped err=<nil>
	W1107 09:12:02.915388    9678 fix.go:129] unexpected machine state, will restart: <nil>
	I1107 09:12:02.937321    9678 out.go:177] * Restarting existing docker container for "multinode-090641" ...
	I1107 09:12:02.959280    9678 cli_runner.go:164] Run: docker start multinode-090641
	I1107 09:12:03.292828    9678 cli_runner.go:164] Run: docker container inspect multinode-090641 --format={{.State.Status}}
	I1107 09:12:03.350674    9678 kic.go:415] container "multinode-090641" state is running.
	I1107 09:12:03.351284    9678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-090641
	I1107 09:12:03.410873    9678 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/config.json ...
	I1107 09:12:03.411273    9678 machine.go:88] provisioning docker machine ...
	I1107 09:12:03.411294    9678 ubuntu.go:169] provisioning hostname "multinode-090641"
	I1107 09:12:03.411395    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:03.472430    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:03.472649    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51425 <nil> <nil>}
	I1107 09:12:03.472666    9678 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-090641 && echo "multinode-090641" | sudo tee /etc/hostname
	I1107 09:12:03.599972    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-090641
	
	I1107 09:12:03.600096    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:03.662785    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:03.662944    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51425 <nil> <nil>}
	I1107 09:12:03.662957    9678 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-090641' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-090641/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-090641' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 09:12:03.777945    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 09:12:03.777975    9678 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15310-2115/.minikube CaCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15310-2115/.minikube}
	I1107 09:12:03.777994    9678 ubuntu.go:177] setting up certificates
	I1107 09:12:03.778002    9678 provision.go:83] configureAuth start
	I1107 09:12:03.778098    9678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-090641
	I1107 09:12:03.835025    9678 provision.go:138] copyHostCerts
	I1107 09:12:03.835072    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 09:12:03.835144    9678 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem, removing ...
	I1107 09:12:03.835153    9678 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 09:12:03.835259    9678 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem (1082 bytes)
	I1107 09:12:03.835957    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 09:12:03.836060    9678 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem, removing ...
	I1107 09:12:03.836069    9678 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 09:12:03.836176    9678 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem (1123 bytes)
	I1107 09:12:03.836457    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 09:12:03.836723    9678 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem, removing ...
	I1107 09:12:03.836730    9678 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 09:12:03.836807    9678 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem (1679 bytes)
	I1107 09:12:03.836960    9678 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem org=jenkins.multinode-090641 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-090641]
	I1107 09:12:04.134049    9678 provision.go:172] copyRemoteCerts
	I1107 09:12:04.134121    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 09:12:04.134190    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:04.193882    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:04.278492    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 09:12:04.278582    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 09:12:04.297166    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 09:12:04.297253    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1107 09:12:04.314709    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 09:12:04.314800    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 09:12:04.332187    9678 provision.go:86] duration metric: configureAuth took 554.156405ms
	I1107 09:12:04.332201    9678 ubuntu.go:193] setting minikube options for container-runtime
	I1107 09:12:04.332387    9678 config.go:180] Loaded profile config "multinode-090641": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:12:04.332468    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:04.388965    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:04.389109    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51425 <nil> <nil>}
	I1107 09:12:04.389119    9678 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 09:12:04.506200    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 09:12:04.506213    9678 ubuntu.go:71] root file system type: overlay
	I1107 09:12:04.506360    9678 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 09:12:04.506461    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:04.562461    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:04.562611    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51425 <nil> <nil>}
	I1107 09:12:04.562664    9678 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 09:12:04.690928    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 09:12:04.691027    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:04.749104    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:04.749274    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51425 <nil> <nil>}
	I1107 09:12:04.749290    9678 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 09:12:04.875856    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: 
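	The rendered unit is only swapped in when it differs from the existing /lib/systemd/system/docker.service; only in that case is systemd reloaded, the service re-enabled, and Docker restarted. A minimal sketch for checking what systemd actually loaded afterwards, assuming shell access to the node (for example via minikube ssh -p multinode-090641):

	    # show the unit systemd loaded and its effective ExecStart line
	    sudo systemctl cat docker.service | grep -n '^ExecStart='
	    systemctl show docker --property=ExecStart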
	I1107 09:12:04.875875    9678 machine.go:91] provisioned docker machine in 1.464554493s
	I1107 09:12:04.875885    9678 start.go:300] post-start starting for "multinode-090641" (driver="docker")
	I1107 09:12:04.875891    9678 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 09:12:04.875966    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 09:12:04.876027    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:04.933346    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:05.019881    9678 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 09:12:05.023180    9678 command_runner.go:130] > NAME="Ubuntu"
	I1107 09:12:05.023189    9678 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I1107 09:12:05.023193    9678 command_runner.go:130] > ID=ubuntu
	I1107 09:12:05.023205    9678 command_runner.go:130] > ID_LIKE=debian
	I1107 09:12:05.023211    9678 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I1107 09:12:05.023214    9678 command_runner.go:130] > VERSION_ID="20.04"
	I1107 09:12:05.023218    9678 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1107 09:12:05.023222    9678 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1107 09:12:05.023226    9678 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1107 09:12:05.023232    9678 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1107 09:12:05.023236    9678 command_runner.go:130] > VERSION_CODENAME=focal
	I1107 09:12:05.023242    9678 command_runner.go:130] > UBUNTU_CODENAME=focal
	I1107 09:12:05.023285    9678 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 09:12:05.023296    9678 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 09:12:05.023306    9678 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 09:12:05.023311    9678 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 09:12:05.023318    9678 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/addons for local assets ...
	I1107 09:12:05.023415    9678 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/files for local assets ...
	I1107 09:12:05.023619    9678 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> 32672.pem in /etc/ssl/certs
	I1107 09:12:05.023625    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> /etc/ssl/certs/32672.pem
	I1107 09:12:05.023830    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 09:12:05.030786    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:12:05.048235    9678 start.go:303] post-start completed in 172.336296ms
	I1107 09:12:05.048314    9678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 09:12:05.048377    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:05.104451    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:05.189375    9678 command_runner.go:130] > 6%!
	(MISSING)I1107 09:12:05.189461    9678 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 09:12:05.193616    9678 command_runner.go:130] > 92G
	I1107 09:12:05.193943    9678 fix.go:57] fixHost completed within 2.333872366s
	I1107 09:12:05.193955    9678 start.go:83] releasing machines lock for "multinode-090641", held for 2.333910993s
	I1107 09:12:05.194046    9678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-090641
	I1107 09:12:05.249832    9678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 09:12:05.249834    9678 ssh_runner.go:195] Run: systemctl --version
	I1107 09:12:05.249917    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:05.249916    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:05.308581    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:05.308811    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:05.392373    9678 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.18)
	I1107 09:12:05.392395    9678 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I1107 09:12:05.448316    9678 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1107 09:12:05.450329    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1107 09:12:05.457569    9678 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I1107 09:12:05.469721    9678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:12:05.542255    9678 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1107 09:12:05.622400    9678 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 09:12:05.631535    9678 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1107 09:12:05.631641    9678 command_runner.go:130] > [Unit]
	I1107 09:12:05.631649    9678 command_runner.go:130] > Description=Docker Application Container Engine
	I1107 09:12:05.631653    9678 command_runner.go:130] > Documentation=https://docs.docker.com
	I1107 09:12:05.631657    9678 command_runner.go:130] > BindsTo=containerd.service
	I1107 09:12:05.631662    9678 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1107 09:12:05.631666    9678 command_runner.go:130] > Wants=network-online.target
	I1107 09:12:05.631673    9678 command_runner.go:130] > Requires=docker.socket
	I1107 09:12:05.631676    9678 command_runner.go:130] > StartLimitBurst=3
	I1107 09:12:05.631680    9678 command_runner.go:130] > StartLimitIntervalSec=60
	I1107 09:12:05.631703    9678 command_runner.go:130] > [Service]
	I1107 09:12:05.631710    9678 command_runner.go:130] > Type=notify
	I1107 09:12:05.631715    9678 command_runner.go:130] > Restart=on-failure
	I1107 09:12:05.631721    9678 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1107 09:12:05.631734    9678 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1107 09:12:05.631741    9678 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1107 09:12:05.631746    9678 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1107 09:12:05.631752    9678 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1107 09:12:05.631758    9678 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1107 09:12:05.631763    9678 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1107 09:12:05.631773    9678 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1107 09:12:05.631781    9678 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1107 09:12:05.631784    9678 command_runner.go:130] > ExecStart=
	I1107 09:12:05.631796    9678 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1107 09:12:05.631802    9678 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1107 09:12:05.631815    9678 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1107 09:12:05.631820    9678 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1107 09:12:05.631828    9678 command_runner.go:130] > LimitNOFILE=infinity
	I1107 09:12:05.631835    9678 command_runner.go:130] > LimitNPROC=infinity
	I1107 09:12:05.631839    9678 command_runner.go:130] > LimitCORE=infinity
	I1107 09:12:05.631844    9678 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1107 09:12:05.631849    9678 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1107 09:12:05.631852    9678 command_runner.go:130] > TasksMax=infinity
	I1107 09:12:05.631855    9678 command_runner.go:130] > TimeoutStartSec=0
	I1107 09:12:05.631861    9678 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1107 09:12:05.631866    9678 command_runner.go:130] > Delegate=yes
	I1107 09:12:05.631872    9678 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1107 09:12:05.631876    9678 command_runner.go:130] > KillMode=process
	I1107 09:12:05.631882    9678 command_runner.go:130] > [Install]
	I1107 09:12:05.631887    9678 command_runner.go:130] > WantedBy=multi-user.target
	I1107 09:12:05.632535    9678 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 09:12:05.632605    9678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 09:12:05.642301    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 09:12:05.654247    9678 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1107 09:12:05.654264    9678 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
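	The file written above tells crictl which CRI endpoint to use, which is why the later sudo crictl version call in this log reaches cri-dockerd without extra flags. A minimal sketch for querying the same endpoint by hand, assuming shell access to the node:

	    cat /etc/crictl.yaml      # runtime-endpoint / image-endpoint written above
	    sudo crictl info          # runtime status via the configured endpoint
	    sudo crictl ps            # CRI containers currently running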
	I1107 09:12:05.655354    9678 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 09:12:05.725219    9678 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 09:12:05.790467    9678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:12:05.848257    9678 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 09:12:06.110835    9678 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 09:12:06.180071    9678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:12:06.244694    9678 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1107 09:12:06.255614    9678 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 09:12:06.255698    9678 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 09:12:06.259469    9678 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1107 09:12:06.259485    9678 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1107 09:12:06.259494    9678 command_runner.go:130] > Device: 97h/151d	Inode: 115         Links: 1
	I1107 09:12:06.259500    9678 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1107 09:12:06.259506    9678 command_runner.go:130] > Access: 2022-11-07 17:12:05.558539143 +0000
	I1107 09:12:06.259510    9678 command_runner.go:130] > Modify: 2022-11-07 17:12:05.558539143 +0000
	I1107 09:12:06.259517    9678 command_runner.go:130] > Change: 2022-11-07 17:12:05.559539143 +0000
	I1107 09:12:06.259522    9678 command_runner.go:130] >  Birth: -
	I1107 09:12:06.259726    9678 start.go:472] Will wait 60s for crictl version
	I1107 09:12:06.259784    9678 ssh_runner.go:195] Run: sudo crictl version
	I1107 09:12:06.286394    9678 command_runner.go:130] > Version:  0.1.0
	I1107 09:12:06.286404    9678 command_runner.go:130] > RuntimeName:  docker
	I1107 09:12:06.286408    9678 command_runner.go:130] > RuntimeVersion:  20.10.20
	I1107 09:12:06.286422    9678 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I1107 09:12:06.288498    9678 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1107 09:12:06.288594    9678 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:12:06.314661    9678 command_runner.go:130] > 20.10.20
	I1107 09:12:06.317146    9678 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:12:06.342943    9678 command_runner.go:130] > 20.10.20
	I1107 09:12:06.391784    9678 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1107 09:12:06.392055    9678 cli_runner.go:164] Run: docker exec -t multinode-090641 dig +short host.docker.internal
	I1107 09:12:06.502873    9678 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1107 09:12:06.502993    9678 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1107 09:12:06.507390    9678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
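	The host IP (192.168.65.2, obtained just above by digging host.docker.internal inside the container) is pinned in /etc/hosts with a filter-then-append rewrite, so repeated starts do not accumulate duplicate host.minikube.internal entries. A quick, hedged way to confirm the entry on the node:

	    grep -n 'host.minikube.internal' /etc/hosts
	    getent hosts host.minikube.internal   # should resolve to 192.168.65.2 here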
	I1107 09:12:06.516796    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:06.573067    9678 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 09:12:06.573154    9678 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 09:12:06.594834    9678 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I1107 09:12:06.594846    9678 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I1107 09:12:06.594851    9678 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I1107 09:12:06.594856    9678 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I1107 09:12:06.594861    9678 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I1107 09:12:06.594864    9678 command_runner.go:130] > registry.k8s.io/pause:3.8
	I1107 09:12:06.594869    9678 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I1107 09:12:06.594875    9678 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I1107 09:12:06.594880    9678 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I1107 09:12:06.594884    9678 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 09:12:06.594888    9678 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1107 09:12:06.596992    9678 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1107 09:12:06.597006    9678 docker.go:543] Images already preloaded, skipping extraction
	I1107 09:12:06.597096    9678 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 09:12:06.618803    9678 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I1107 09:12:06.618817    9678 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I1107 09:12:06.618824    9678 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I1107 09:12:06.618829    9678 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I1107 09:12:06.618834    9678 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I1107 09:12:06.618841    9678 command_runner.go:130] > registry.k8s.io/pause:3.8
	I1107 09:12:06.618855    9678 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I1107 09:12:06.618863    9678 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I1107 09:12:06.618870    9678 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I1107 09:12:06.618880    9678 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 09:12:06.618889    9678 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1107 09:12:06.621080    9678 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1107 09:12:06.621097    9678 cache_images.go:84] Images are preloaded, skipping loading
	I1107 09:12:06.621189    9678 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 09:12:06.684368    9678 command_runner.go:130] > systemd
	I1107 09:12:06.686787    9678 cni.go:95] Creating CNI manager for ""
	I1107 09:12:06.686800    9678 cni.go:156] 2 nodes found, recommending kindnet
	I1107 09:12:06.686816    9678 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 09:12:06.686837    9678 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-090641 NodeName:multinode-090641 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 09:12:06.686976    9678 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-090641"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 09:12:06.687072    9678 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-090641 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-090641 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
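	The kubelet is driven the same way as dockerd above: the base unit's ExecStart is cleared and replaced with the full flag set, and the unit plus its 10-kubeadm.conf drop-in are copied to the node just below. A minimal sketch to see what systemd ends up with, assuming shell access to the node:

	    sudo systemctl cat kubelet                                        # base unit plus drop-ins
	    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # drop-in copied below
	    systemctl show kubelet --property=ExecStart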
	I1107 09:12:06.687145    9678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1107 09:12:06.694056    9678 command_runner.go:130] > kubeadm
	I1107 09:12:06.694064    9678 command_runner.go:130] > kubectl
	I1107 09:12:06.694068    9678 command_runner.go:130] > kubelet
	I1107 09:12:06.694703    9678 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 09:12:06.694758    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 09:12:06.701988    9678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (478 bytes)
	I1107 09:12:06.713880    9678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 09:12:06.726107    9678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2038 bytes)
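	The rendered kubeadm config (the four YAML documents shown above) lands on the node as /var/tmp/minikube/kubeadm.yaml.new; further down in this log it is diffed against the existing kubeadm.yaml and promoted with a plain cp when they differ. A minimal sketch for inspecting both files, assuming shell access to the node:

	    sudo cat /var/tmp/minikube/kubeadm.yaml.new
	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new   # a non-empty diff is what triggers the reconfigure path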
	I1107 09:12:06.738562    9678 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1107 09:12:06.742186    9678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 09:12:06.751571    9678 certs.go:54] Setting up /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641 for IP: 192.168.58.2
	I1107 09:12:06.751705    9678 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key
	I1107 09:12:06.751776    9678 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key
	I1107 09:12:06.751866    9678 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.key
	I1107 09:12:06.751942    9678 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/apiserver.key.cee25041
	I1107 09:12:06.752004    9678 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/proxy-client.key
	I1107 09:12:06.752013    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1107 09:12:06.752043    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1107 09:12:06.752071    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1107 09:12:06.752092    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1107 09:12:06.752113    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 09:12:06.752133    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 09:12:06.752153    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 09:12:06.752177    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 09:12:06.752283    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem (1338 bytes)
	W1107 09:12:06.752326    9678 certs.go:384] ignoring /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267_empty.pem, impossibly tiny 0 bytes
	I1107 09:12:06.752338    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 09:12:06.752383    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem (1082 bytes)
	I1107 09:12:06.752421    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem (1123 bytes)
	I1107 09:12:06.752456    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem (1679 bytes)
	I1107 09:12:06.752533    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:12:06.752567    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem -> /usr/share/ca-certificates/3267.pem
	I1107 09:12:06.752590    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> /usr/share/ca-certificates/32672.pem
	I1107 09:12:06.752611    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:06.753087    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 09:12:06.770020    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 09:12:06.787192    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 09:12:06.805067    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 09:12:06.821736    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 09:12:06.839013    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 09:12:06.855916    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 09:12:06.874527    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 09:12:06.891576    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem --> /usr/share/ca-certificates/3267.pem (1338 bytes)
	I1107 09:12:06.908404    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /usr/share/ca-certificates/32672.pem (1708 bytes)
	I1107 09:12:06.925327    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 09:12:06.941704    9678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 09:12:06.954526    9678 ssh_runner.go:195] Run: openssl version
	I1107 09:12:06.959310    9678 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I1107 09:12:06.959668    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 09:12:06.967759    9678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:06.971487    9678 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:06.971596    9678 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:06.971651    9678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:06.976611    9678 command_runner.go:130] > b5213941
	I1107 09:12:06.976951    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 09:12:06.984041    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3267.pem && ln -fs /usr/share/ca-certificates/3267.pem /etc/ssl/certs/3267.pem"
	I1107 09:12:06.992152    9678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3267.pem
	I1107 09:12:06.996004    9678 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/3267.pem
	I1107 09:12:06.996148    9678 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/3267.pem
	I1107 09:12:06.996194    9678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3267.pem
	I1107 09:12:07.001104    9678 command_runner.go:130] > 51391683
	I1107 09:12:07.001463    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3267.pem /etc/ssl/certs/51391683.0"
	I1107 09:12:07.008703    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32672.pem && ln -fs /usr/share/ca-certificates/32672.pem /etc/ssl/certs/32672.pem"
	I1107 09:12:07.016423    9678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32672.pem
	I1107 09:12:07.019974    9678 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/32672.pem
	I1107 09:12:07.020000    9678 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/32672.pem
	I1107 09:12:07.020064    9678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32672.pem
	I1107 09:12:07.024711    9678 command_runner.go:130] > 3ec20f2e
	I1107 09:12:07.024989    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32672.pem /etc/ssl/certs/3ec20f2e.0"
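	Each CA is made discoverable to OpenSSL by hashing its subject (openssl x509 -hash) and symlinking it into /etc/ssl/certs under <hash>.0, the layout that openssl verify -CApath and most TLS clients expect. A minimal sketch reproducing the check for the minikube CA above (hash b5213941), assuming shell access to the node:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem          # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                                  # symlink created above
	    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem  # should report OK once the hash link is in place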
	I1107 09:12:07.031899    9678 kubeadm.go:396] StartCluster: {Name:multinode-090641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-090641 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:12:07.032049    9678 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 09:12:07.055157    9678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 09:12:07.061822    9678 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1107 09:12:07.061831    9678 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1107 09:12:07.061836    9678 command_runner.go:130] > /var/lib/minikube/etcd:
	I1107 09:12:07.061840    9678 command_runner.go:130] > member
	I1107 09:12:07.062362    9678 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1107 09:12:07.062374    9678 kubeadm.go:627] restartCluster start
	I1107 09:12:07.062427    9678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1107 09:12:07.069418    9678 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:07.091194    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:07.150593    9678 kubeconfig.go:135] verify returned: extract IP: "multinode-090641" does not appear in /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:12:07.150681    9678 kubeconfig.go:146] "multinode-090641" context is missing from /Users/jenkins/minikube-integration/15310-2115/kubeconfig - will repair!
	I1107 09:12:07.150938    9678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/kubeconfig: {Name:mk892d56d979702eee7d784abc692970bda7bca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:12:07.151424    9678 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:12:07.151618    9678 kapi.go:59] client config for multinode-090641: &rest.Config{Host:"https://127.0.0.1:51429", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.key", CAFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2345ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 09:12:07.151960    9678 cert_rotation.go:137] Starting client certificate rotation controller
	I1107 09:12:07.152131    9678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1107 09:12:07.159928    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:07.159989    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:07.168070    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:07.370166    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:07.370344    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:07.381178    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:07.568345    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:07.568492    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:07.578934    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:07.770214    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:07.770360    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:07.781200    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:07.970207    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:07.970383    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:07.980898    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:08.170217    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:08.170466    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:08.181215    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:08.370258    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:08.370515    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:08.380803    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:08.568797    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:08.568910    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:08.580265    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:08.770278    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:08.770492    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:08.781323    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:08.970258    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:08.970430    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:08.981927    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:09.170258    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:09.170472    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:09.181873    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:09.370239    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:09.370434    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:09.381523    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:09.570271    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:09.570448    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:09.581290    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:09.768680    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:09.768909    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:09.779498    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:09.970354    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:09.970510    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:09.980796    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:10.170305    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:10.170516    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:10.181682    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:10.181694    9678 api_server.go:165] Checking apiserver status ...
	I1107 09:12:10.181752    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:12:10.190178    9678 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:10.190190    9678 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I1107 09:12:10.190198    9678 kubeadm.go:1114] stopping kube-system containers ...
	I1107 09:12:10.190281    9678 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 09:12:10.214062    9678 command_runner.go:130] > 3bf8f38c0aaf
	I1107 09:12:10.214072    9678 command_runner.go:130] > f8d0de33debd
	I1107 09:12:10.214083    9678 command_runner.go:130] > 627de0fec15d
	I1107 09:12:10.214087    9678 command_runner.go:130] > bea8197d27a3
	I1107 09:12:10.214093    9678 command_runner.go:130] > fcff0a2c79bd
	I1107 09:12:10.214100    9678 command_runner.go:130] > 99867f7318f3
	I1107 09:12:10.214105    9678 command_runner.go:130] > e521a3a86451
	I1107 09:12:10.214109    9678 command_runner.go:130] > 2141f6b1f9b0
	I1107 09:12:10.214118    9678 command_runner.go:130] > 5d163fc295ef
	I1107 09:12:10.214122    9678 command_runner.go:130] > 6532ace61a77
	I1107 09:12:10.214126    9678 command_runner.go:130] > 309af6aa7d07
	I1107 09:12:10.214129    9678 command_runner.go:130] > 99e10e3b23a1
	I1107 09:12:10.214133    9678 command_runner.go:130] > 1244b6d56687
	I1107 09:12:10.214147    9678 command_runner.go:130] > 35468c6f4808
	I1107 09:12:10.214153    9678 command_runner.go:130] > 44f3aabcd4fb
	I1107 09:12:10.214157    9678 command_runner.go:130] > 8c41e71be632
	I1107 09:12:10.214165    9678 command_runner.go:130] > f786068f3c1d
	I1107 09:12:10.214170    9678 command_runner.go:130] > 0c7686c51f0a
	I1107 09:12:10.214174    9678 command_runner.go:130] > 23e28a639e24
	I1107 09:12:10.214179    9678 command_runner.go:130] > 24731ab856d5
	I1107 09:12:10.214182    9678 command_runner.go:130] > 08c54785a74e
	I1107 09:12:10.214187    9678 command_runner.go:130] > bce4e7cd7c5d
	I1107 09:12:10.214190    9678 command_runner.go:130] > 729a721b15ce
	I1107 09:12:10.214194    9678 command_runner.go:130] > a9009c3f6cd2
	I1107 09:12:10.214210    9678 command_runner.go:130] > 30cc23b24e38
	I1107 09:12:10.214220    9678 command_runner.go:130] > 24b8c9ce80d6
	I1107 09:12:10.214225    9678 command_runner.go:130] > e67b00fac7f3
	I1107 09:12:10.214230    9678 command_runner.go:130] > 0301a36d3c5b
	I1107 09:12:10.214234    9678 command_runner.go:130] > 1d0a21243dfd
	I1107 09:12:10.214237    9678 command_runner.go:130] > c2650598bf53
	I1107 09:12:10.214241    9678 command_runner.go:130] > ca7fb2d58b7c
	I1107 09:12:10.214244    9678 command_runner.go:130] > a63fcdcf8012
	I1107 09:12:10.216363    9678 docker.go:444] Stopping containers: [3bf8f38c0aaf f8d0de33debd 627de0fec15d bea8197d27a3 fcff0a2c79bd 99867f7318f3 e521a3a86451 2141f6b1f9b0 5d163fc295ef 6532ace61a77 309af6aa7d07 99e10e3b23a1 1244b6d56687 35468c6f4808 44f3aabcd4fb 8c41e71be632 f786068f3c1d 0c7686c51f0a 23e28a639e24 24731ab856d5 08c54785a74e bce4e7cd7c5d 729a721b15ce a9009c3f6cd2 30cc23b24e38 24b8c9ce80d6 e67b00fac7f3 0301a36d3c5b 1d0a21243dfd c2650598bf53 ca7fb2d58b7c a63fcdcf8012]
	I1107 09:12:10.216463    9678 ssh_runner.go:195] Run: docker stop 3bf8f38c0aaf f8d0de33debd 627de0fec15d bea8197d27a3 fcff0a2c79bd 99867f7318f3 e521a3a86451 2141f6b1f9b0 5d163fc295ef 6532ace61a77 309af6aa7d07 99e10e3b23a1 1244b6d56687 35468c6f4808 44f3aabcd4fb 8c41e71be632 f786068f3c1d 0c7686c51f0a 23e28a639e24 24731ab856d5 08c54785a74e bce4e7cd7c5d 729a721b15ce a9009c3f6cd2 30cc23b24e38 24b8c9ce80d6 e67b00fac7f3 0301a36d3c5b 1d0a21243dfd c2650598bf53 ca7fb2d58b7c a63fcdcf8012
	I1107 09:12:10.237555    9678 command_runner.go:130] > 3bf8f38c0aaf
	I1107 09:12:10.237586    9678 command_runner.go:130] > f8d0de33debd
	I1107 09:12:10.237595    9678 command_runner.go:130] > 627de0fec15d
	I1107 09:12:10.238078    9678 command_runner.go:130] > bea8197d27a3
	I1107 09:12:10.238088    9678 command_runner.go:130] > fcff0a2c79bd
	I1107 09:12:10.238091    9678 command_runner.go:130] > 99867f7318f3
	I1107 09:12:10.238095    9678 command_runner.go:130] > e521a3a86451
	I1107 09:12:10.238103    9678 command_runner.go:130] > 2141f6b1f9b0
	I1107 09:12:10.238109    9678 command_runner.go:130] > 5d163fc295ef
	I1107 09:12:10.238119    9678 command_runner.go:130] > 6532ace61a77
	I1107 09:12:10.238123    9678 command_runner.go:130] > 309af6aa7d07
	I1107 09:12:10.238454    9678 command_runner.go:130] > 99e10e3b23a1
	I1107 09:12:10.238464    9678 command_runner.go:130] > 1244b6d56687
	I1107 09:12:10.238470    9678 command_runner.go:130] > 35468c6f4808
	I1107 09:12:10.238476    9678 command_runner.go:130] > 44f3aabcd4fb
	I1107 09:12:10.238824    9678 command_runner.go:130] > 8c41e71be632
	I1107 09:12:10.238840    9678 command_runner.go:130] > f786068f3c1d
	I1107 09:12:10.238860    9678 command_runner.go:130] > 0c7686c51f0a
	I1107 09:12:10.238868    9678 command_runner.go:130] > 23e28a639e24
	I1107 09:12:10.238878    9678 command_runner.go:130] > 24731ab856d5
	I1107 09:12:10.238889    9678 command_runner.go:130] > 08c54785a74e
	I1107 09:12:10.238895    9678 command_runner.go:130] > bce4e7cd7c5d
	I1107 09:12:10.238901    9678 command_runner.go:130] > 729a721b15ce
	I1107 09:12:10.238906    9678 command_runner.go:130] > a9009c3f6cd2
	I1107 09:12:10.239062    9678 command_runner.go:130] > 30cc23b24e38
	I1107 09:12:10.239072    9678 command_runner.go:130] > 24b8c9ce80d6
	I1107 09:12:10.239077    9678 command_runner.go:130] > e67b00fac7f3
	I1107 09:12:10.239083    9678 command_runner.go:130] > 0301a36d3c5b
	I1107 09:12:10.239088    9678 command_runner.go:130] > 1d0a21243dfd
	I1107 09:12:10.239098    9678 command_runner.go:130] > c2650598bf53
	I1107 09:12:10.239104    9678 command_runner.go:130] > ca7fb2d58b7c
	I1107 09:12:10.239109    9678 command_runner.go:130] > a63fcdcf8012
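	The filter name=k8s_.*_(kube-system)_ matches the container names that dockershim/cri-dockerd assigns to kube-system pods (k8s_<container>_<pod>_<namespace>_<uid>_<attempt>), so only control-plane and system containers are stopped before the restart. A minimal sketch for listing the same set with names attached, assuming shell access to the node:

	    docker ps -a --filter name='k8s_.*_(kube-system)_' --format '{{.ID}}  {{.Names}}'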
	I1107 09:12:10.241646    9678 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1107 09:12:10.251660    9678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 09:12:10.258161    9678 command_runner.go:130] > -rw------- 1 root root 5639 Nov  7 17:06 /etc/kubernetes/admin.conf
	I1107 09:12:10.258172    9678 command_runner.go:130] > -rw------- 1 root root 5652 Nov  7 17:10 /etc/kubernetes/controller-manager.conf
	I1107 09:12:10.258177    9678 command_runner.go:130] > -rw------- 1 root root 2003 Nov  7 17:07 /etc/kubernetes/kubelet.conf
	I1107 09:12:10.258183    9678 command_runner.go:130] > -rw------- 1 root root 5600 Nov  7 17:10 /etc/kubernetes/scheduler.conf
	I1107 09:12:10.258979    9678 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Nov  7 17:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Nov  7 17:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2003 Nov  7 17:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Nov  7 17:10 /etc/kubernetes/scheduler.conf
	
	I1107 09:12:10.259034    9678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1107 09:12:10.266039    9678 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I1107 09:12:10.266712    9678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1107 09:12:10.273301    9678 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I1107 09:12:10.273964    9678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1107 09:12:10.280757    9678 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:10.280827    9678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1107 09:12:10.287570    9678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1107 09:12:10.294481    9678 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:12:10.294540    9678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1107 09:12:10.301221    9678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 09:12:10.308491    9678 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1107 09:12:10.308502    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:12:10.349862    9678 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 09:12:10.350022    9678 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1107 09:12:10.350451    9678 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1107 09:12:10.350783    9678 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1107 09:12:10.351266    9678 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1107 09:12:10.351603    9678 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1107 09:12:10.351876    9678 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1107 09:12:10.352283    9678 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1107 09:12:10.352788    9678 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1107 09:12:10.353111    9678 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1107 09:12:10.353371    9678 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1107 09:12:10.353743    9678 command_runner.go:130] > [certs] Using the existing "sa" key
	I1107 09:12:10.356916    9678 command_runner.go:130] ! W1107 17:12:10.357189    1124 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:12:10.356939    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:12:10.397976    9678 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 09:12:10.443099    9678 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I1107 09:12:10.788925    9678 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I1107 09:12:10.882805    9678 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 09:12:11.046975    9678 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 09:12:11.050621    9678 command_runner.go:130] ! W1107 17:12:10.405606    1134 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:12:11.050643    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:12:11.102173    9678 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 09:12:11.103072    9678 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 09:12:11.103081    9678 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1107 09:12:11.176587    9678 command_runner.go:130] ! W1107 17:12:11.100178    1156 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:12:11.176610    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:12:11.220088    9678 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 09:12:11.220108    9678 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 09:12:11.224443    9678 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 09:12:11.225774    9678 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 09:12:11.230257    9678 command_runner.go:130] ! W1107 17:12:11.226990    1191 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:12:11.230279    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:12:11.306409    9678 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 09:12:11.312835    9678 command_runner.go:130] ! W1107 17:12:11.312909    1204 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:12:11.312878    9678 api_server.go:51] waiting for apiserver process to appear ...
	I1107 09:12:11.312965    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:12:11.824988    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:12:12.325318    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:12:12.823100    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:12:13.324050    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:12:13.340140    9678 command_runner.go:130] > 1791
	I1107 09:12:13.340173    9678 api_server.go:71] duration metric: took 2.027253262s to wait for apiserver process to appear ...
	I1107 09:12:13.340183    9678 api_server.go:87] waiting for apiserver healthz status ...
	I1107 09:12:13.340199    9678 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51429/healthz ...
	I1107 09:12:16.540783    9678 api_server.go:278] https://127.0.0.1:51429/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1107 09:12:16.540799    9678 api_server.go:102] status: https://127.0.0.1:51429/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 09:12:17.040907    9678 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51429/healthz ...
	I1107 09:12:17.046838    9678 api_server.go:278] https://127.0.0.1:51429/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 09:12:17.046852    9678 api_server.go:102] status: https://127.0.0.1:51429/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 09:12:17.541105    9678 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51429/healthz ...
	I1107 09:12:17.547168    9678 api_server.go:278] https://127.0.0.1:51429/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 09:12:17.547256    9678 api_server.go:102] status: https://127.0.0.1:51429/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 09:12:18.040932    9678 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51429/healthz ...
	I1107 09:12:18.047006    9678 api_server.go:278] https://127.0.0.1:51429/healthz returned 200:
	ok
	I1107 09:12:18.047071    9678 round_trippers.go:463] GET https://127.0.0.1:51429/version
	I1107 09:12:18.047076    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:18.047084    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:18.047091    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:18.053718    9678 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1107 09:12:18.053731    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:18.053738    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:18.053745    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:18.053751    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:18.053758    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:18.053763    9678 round_trippers.go:580]     Content-Length: 263
	I1107 09:12:18.053768    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:18 GMT
	I1107 09:12:18.053773    9678 round_trippers.go:580]     Audit-Id: de192e3b-6d06-4094-b260-e1922c1fe08c
	I1107 09:12:18.053795    9678 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1107 09:12:18.053845    9678 api_server.go:140] control plane version: v1.25.3
	I1107 09:12:18.053856    9678 api_server.go:130] duration metric: took 4.71354866s to wait for apiserver health ...
	I1107 09:12:18.053861    9678 cni.go:95] Creating CNI manager for ""
	I1107 09:12:18.053866    9678 cni.go:156] 2 nodes found, recommending kindnet
	I1107 09:12:18.078589    9678 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1107 09:12:18.114381    9678 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 09:12:18.119212    9678 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1107 09:12:18.119228    9678 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I1107 09:12:18.119236    9678 command_runner.go:130] > Device: 8fh/143d	Inode: 1185203     Links: 1
	I1107 09:12:18.119245    9678 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 09:12:18.119250    9678 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I1107 09:12:18.119254    9678 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I1107 09:12:18.119258    9678 command_runner.go:130] > Change: 2022-11-07 16:45:45.185426543 +0000
	I1107 09:12:18.119261    9678 command_runner.go:130] >  Birth: -
	I1107 09:12:18.119446    9678 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1107 09:12:18.119455    9678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1107 09:12:18.132480    9678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 09:12:19.009788    9678 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1107 09:12:19.011658    9678 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1107 09:12:19.013763    9678 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1107 09:12:19.023160    9678 command_runner.go:130] > daemonset.apps/kindnet configured
	I1107 09:12:19.089859    9678 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 09:12:19.089964    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods
	I1107 09:12:19.089975    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:19.089988    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:19.090002    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:19.096455    9678 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1107 09:12:19.096500    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:19.096518    9678 round_trippers.go:580]     Audit-Id: 397371af-91b0-4004-bfed-550f6679f948
	I1107 09:12:19.096528    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:19.096537    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:19.096546    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:19.096555    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:19.096566    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:19 GMT
	I1107 09:12:19.097707    9678 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"993"},"items":[{"metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"985","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85260 chars]
	I1107 09:12:19.100742    9678 system_pods.go:59] 12 kube-system pods found
	I1107 09:12:19.100760    9678 system_pods.go:61] "coredns-565d847f94-54csh" [6e280b18-683c-4888-93db-3756e665d1f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 09:12:19.100765    9678 system_pods.go:61] "etcd-multinode-090641" [b5cec8d5-21cb-4a1e-a05a-92b541499e1c] Running
	I1107 09:12:19.100772    9678 system_pods.go:61] "kindnet-5d6kd" [be85ead0-4248-490e-a8fc-2a92f78801f3] Running
	I1107 09:12:19.100775    9678 system_pods.go:61] "kindnet-mgtrp" [e8094b6c-54ad-4f87-aaf3-88dc5155b128] Running
	I1107 09:12:19.100778    9678 system_pods.go:61] "kindnet-nx5lb" [3021a22e-37f1-40d1-9205-1abfb03e58a9] Running
	I1107 09:12:19.100782    9678 system_pods.go:61] "kube-apiserver-multinode-090641" [3ae5af06-6458-4954-a296-a43002732bf4] Running
	I1107 09:12:19.100787    9678 system_pods.go:61] "kube-controller-manager-multinode-090641" [1c2584e6-6b2e-4c67-aea4-7c5568355345] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 09:12:19.100793    9678 system_pods.go:61] "kube-proxy-hxglr" [64e6c03e-e0da-4b75-a1eb-ff55dd0c84ff] Running
	I1107 09:12:19.100797    9678 system_pods.go:61] "kube-proxy-nwck5" [017b9de2-3593-4e50-9493-7d14c0b994ce] Running
	I1107 09:12:19.100800    9678 system_pods.go:61] "kube-proxy-rqnqb" [f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d] Running
	I1107 09:12:19.100804    9678 system_pods.go:61] "kube-scheduler-multinode-090641" [76a48883-135f-49f5-831d-d0182408b2ca] Running
	I1107 09:12:19.100808    9678 system_pods.go:61] "storage-provisioner" [29595449-7701-47e2-af62-0638177bb673] Running
	I1107 09:12:19.100811    9678 system_pods.go:74] duration metric: took 10.937241ms to wait for pod list to return data ...
	I1107 09:12:19.100816    9678 node_conditions.go:102] verifying NodePressure condition ...
	I1107 09:12:19.100862    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes
	I1107 09:12:19.100866    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:19.100873    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:19.100878    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:19.103251    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:19.103263    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:19.103269    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:19.103273    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:19 GMT
	I1107 09:12:19.103278    9678 round_trippers.go:580]     Audit-Id: 8af27a4b-79fa-40c0-b790-a04e27530aa3
	I1107 09:12:19.103283    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:19.103287    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:19.103293    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:19.103379    9678 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"994"},"items":[{"metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10902 chars]
	I1107 09:12:19.103882    9678 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1107 09:12:19.103898    9678 node_conditions.go:123] node cpu capacity is 6
	I1107 09:12:19.103911    9678 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1107 09:12:19.103915    9678 node_conditions.go:123] node cpu capacity is 6
	I1107 09:12:19.103918    9678 node_conditions.go:105] duration metric: took 3.099375ms to run NodePressure ...
	I1107 09:12:19.103932    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:12:19.312337    9678 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1107 09:12:19.392061    9678 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1107 09:12:19.396216    9678 command_runner.go:130] ! W1107 17:12:19.214687    2560 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:12:19.396237    9678 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1107 09:12:19.396291    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1107 09:12:19.396297    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:19.396305    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:19.396314    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:19.400258    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:19.400277    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:19.400285    9678 round_trippers.go:580]     Audit-Id: cd3d56be-acf4-46f4-9dd3-30f04e0291c6
	I1107 09:12:19.400292    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:19.400298    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:19.400303    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:19.400307    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:19.400313    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:19 GMT
	I1107 09:12:19.400543    9678 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"999"},"items":[{"metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"781","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30654 chars]
	I1107 09:12:19.401480    9678 kubeadm.go:778] kubelet initialised
	I1107 09:12:19.401491    9678 kubeadm.go:779] duration metric: took 5.24592ms waiting for restarted kubelet to initialise ...
	I1107 09:12:19.401499    9678 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 09:12:19.401545    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods
	I1107 09:12:19.401552    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:19.401561    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:19.401570    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:19.405641    9678 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 09:12:19.405657    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:19.405666    9678 round_trippers.go:580]     Audit-Id: 6eb9d3d7-d64b-4d3d-aac4-59340e60c1b3
	I1107 09:12:19.405673    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:19.405680    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:19.405688    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:19.405717    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:19.405730    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:19 GMT
	I1107 09:12:19.406801    9678 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"999"},"items":[{"metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"985","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85452 chars]
	I1107 09:12:19.408982    9678 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-54csh" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:19.409025    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:19.409030    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:19.409037    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:19.409045    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:19.411672    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:19.411687    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:19.411695    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:19 GMT
	I1107 09:12:19.411701    9678 round_trippers.go:580]     Audit-Id: d001b521-c6a1-48d1-9ae5-0ec8d5c1f79a
	I1107 09:12:19.411707    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:19.411712    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:19.411717    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:19.411723    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:19.411805    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"985","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6602 chars]
	I1107 09:12:19.412137    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:19.412145    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:19.412151    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:19.412158    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:19.414552    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:19.414572    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:19.414583    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:19.414594    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:19 GMT
	I1107 09:12:19.414605    9678 round_trippers.go:580]     Audit-Id: aa418369-6369-4ab3-87b0-9ac47bbc2ba9
	I1107 09:12:19.414615    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:19.414624    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:19.414659    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:19.414736    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:19.917252    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:19.917278    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:19.917291    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:19.917301    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:19.921124    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:19.921140    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:19.921148    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:19.921156    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:19 GMT
	I1107 09:12:19.921182    9678 round_trippers.go:580]     Audit-Id: eae88b3f-c227-4d63-946f-7e31c757ebec
	I1107 09:12:19.921196    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:19.921206    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:19.921212    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:19.921320    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"985","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6602 chars]
	I1107 09:12:19.921697    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:19.921710    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:19.921718    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:19.921725    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:19.923803    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:19.923811    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:19.923817    9678 round_trippers.go:580]     Audit-Id: 05634058-30e3-4919-bcd9-b392654e6282
	I1107 09:12:19.923821    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:19.923827    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:19.923831    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:19.923835    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:19.923840    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:19 GMT
	I1107 09:12:19.923888    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:20.415402    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:20.415415    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:20.415421    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:20.415430    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:20.417606    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:20.417616    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:20.417621    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:20.417626    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:20.417631    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:20.417636    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:20 GMT
	I1107 09:12:20.417640    9678 round_trippers.go:580]     Audit-Id: 2f35f98d-5b52-4d56-aa92-ab95ed7e61eb
	I1107 09:12:20.417647    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:20.418028    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"985","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6602 chars]
	I1107 09:12:20.418323    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:20.418330    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:20.418336    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:20.418355    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:20.420570    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:20.420579    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:20.420585    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:20.420590    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:20 GMT
	I1107 09:12:20.420597    9678 round_trippers.go:580]     Audit-Id: b6a2acec-7621-4852-bf73-6c6a2d52568a
	I1107 09:12:20.420602    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:20.420607    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:20.420611    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:20.420656    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:20.917245    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:20.917271    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:20.917283    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:20.917293    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:20.921138    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:20.921156    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:20.921167    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:20 GMT
	I1107 09:12:20.921177    9678 round_trippers.go:580]     Audit-Id: 26be362b-6659-4783-9fff-92f3f159340b
	I1107 09:12:20.921185    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:20.921197    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:20.921204    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:20.921228    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:20.921446    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"985","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6602 chars]
	I1107 09:12:20.921841    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:20.921847    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:20.921853    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:20.921859    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:20.923740    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:20.923750    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:20.923755    9678 round_trippers.go:580]     Audit-Id: 7aa08aa7-03ba-473f-abc0-8854dee983bb
	I1107 09:12:20.923760    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:20.923769    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:20.923774    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:20.923779    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:20.923784    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:20 GMT
	I1107 09:12:20.923833    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:21.415374    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:21.415392    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:21.415400    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:21.415405    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:21.418316    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:21.418331    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:21.418338    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:21.418343    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:21.418347    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:21.418358    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:21.418366    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:21 GMT
	I1107 09:12:21.418373    9678 round_trippers.go:580]     Audit-Id: a12fab9f-22ee-4c4e-b224-cebdf2b59223
	I1107 09:12:21.418468    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"985","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6602 chars]
	I1107 09:12:21.418818    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:21.418826    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:21.418833    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:21.418838    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:21.421497    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:21.421508    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:21.421513    9678 round_trippers.go:580]     Audit-Id: 3334edd7-34d9-483d-9e19-56a34b7eb4b0
	I1107 09:12:21.421518    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:21.421523    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:21.421528    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:21.421535    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:21.421541    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:21 GMT
	I1107 09:12:21.421778    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:21.421988    9678 pod_ready.go:102] pod "coredns-565d847f94-54csh" in "kube-system" namespace has status "Ready":"False"
	I1107 09:12:21.917345    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:21.917366    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:21.917379    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:21.917390    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:21.921254    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:21.921271    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:21.921279    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:21 GMT
	I1107 09:12:21.921287    9678 round_trippers.go:580]     Audit-Id: 1f11a7b5-2751-4f8f-92e0-50d3e71aedcd
	I1107 09:12:21.921294    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:21.921301    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:21.921314    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:21.921326    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:21.921426    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"985","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6602 chars]
	I1107 09:12:21.921809    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:21.921818    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:21.921843    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:21.921849    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:21.924169    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:21.924179    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:21.924184    9678 round_trippers.go:580]     Audit-Id: b9ec301c-f596-49b5-951f-3c1ec88b77cd
	I1107 09:12:21.924191    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:21.924199    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:21.924204    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:21.924208    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:21.924214    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:21 GMT
	I1107 09:12:21.924375    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:22.417053    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:22.417073    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:22.417087    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:22.417109    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:22.420912    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:22.420925    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:22.420932    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:22.420939    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:22.420946    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:22.420953    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:22 GMT
	I1107 09:12:22.420961    9678 round_trippers.go:580]     Audit-Id: b9e74233-b2b2-4d47-a543-3c04d87aa9da
	I1107 09:12:22.420968    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:22.421039    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1022","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6780 chars]
	I1107 09:12:22.421370    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:22.421377    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:22.421383    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:22.421388    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:22.423284    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:22.423293    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:22.423299    9678 round_trippers.go:580]     Audit-Id: 8d59a573-b0c5-4909-a555-5bff75e067f6
	I1107 09:12:22.423304    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:22.423309    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:22.423315    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:22.423319    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:22.423324    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:22 GMT
	I1107 09:12:22.423369    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:22.915365    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:22.915387    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:22.915399    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:22.915409    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:22.919071    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:22.919081    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:22.919086    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:22.919091    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:22.919096    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:22.919113    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:22 GMT
	I1107 09:12:22.919123    9678 round_trippers.go:580]     Audit-Id: 7460eae7-03b4-4928-a838-088790c91139
	I1107 09:12:22.919128    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:22.919288    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1022","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6780 chars]
	I1107 09:12:22.919575    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:22.919581    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:22.919587    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:22.919594    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:22.921408    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:22.921416    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:22.921421    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:22.921426    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:22 GMT
	I1107 09:12:22.921432    9678 round_trippers.go:580]     Audit-Id: de7b3a57-b001-4baf-9573-29142384cfd2
	I1107 09:12:22.921436    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:22.921441    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:22.921446    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:22.921780    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:23.415241    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:23.415253    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:23.415263    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:23.415270    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:23.418664    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:23.418677    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:23.418684    9678 round_trippers.go:580]     Audit-Id: 23b1a853-521a-48e2-a07e-866fea668734
	I1107 09:12:23.418689    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:23.418694    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:23.418698    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:23.418704    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:23.418708    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:23 GMT
	I1107 09:12:23.418775    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1022","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6780 chars]
	I1107 09:12:23.419079    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:23.419085    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:23.419092    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:23.419097    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:23.420890    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:23.420900    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:23.420905    9678 round_trippers.go:580]     Audit-Id: a8ae45f5-8573-4d36-b67d-30d2a1a59c10
	I1107 09:12:23.420910    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:23.420916    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:23.420920    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:23.420925    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:23.420930    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:23 GMT
	I1107 09:12:23.420971    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:23.915525    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:23.915548    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:23.915561    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:23.915571    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:23.919402    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:23.919420    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:23.919444    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:23.919449    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:23.919454    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:23 GMT
	I1107 09:12:23.919462    9678 round_trippers.go:580]     Audit-Id: f4220a37-7692-4e36-8e45-73a04b227e34
	I1107 09:12:23.919468    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:23.919474    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:23.919543    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1022","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6780 chars]
	I1107 09:12:23.919827    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:23.919833    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:23.919839    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:23.919844    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:23.921863    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:23.921873    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:23.921883    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:23.921888    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:23 GMT
	I1107 09:12:23.921892    9678 round_trippers.go:580]     Audit-Id: b7206d8e-45bb-4234-800a-b106be45c3a6
	I1107 09:12:23.921897    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:23.921902    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:23.921907    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:23.922028    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:23.922221    9678 pod_ready.go:102] pod "coredns-565d847f94-54csh" in "kube-system" namespace has status "Ready":"False"
	I1107 09:12:24.417350    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:24.417371    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:24.417383    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:24.417392    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:24.421167    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:24.421182    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:24.421190    9678 round_trippers.go:580]     Audit-Id: 12cc1d43-769c-4c64-bc2f-d6f89bc7fafb
	I1107 09:12:24.421197    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:24.421206    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:24.421212    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:24.421219    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:24.421225    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:24 GMT
	I1107 09:12:24.421309    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1022","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6780 chars]
	I1107 09:12:24.421704    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:24.421710    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:24.421717    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:24.421723    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:24.423518    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:24.423528    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:24.423533    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:24.423538    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:24.423543    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:24.423548    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:24 GMT
	I1107 09:12:24.423553    9678 round_trippers.go:580]     Audit-Id: 1b8cc09e-65e5-45b3-83eb-ca889abdf9a4
	I1107 09:12:24.423557    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:24.423793    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:24.917447    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:24.917471    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:24.917485    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:24.917495    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:24.921259    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:24.921274    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:24.921290    9678 round_trippers.go:580]     Audit-Id: 0e8ed72a-9c08-4be2-b146-c879b3c5a1df
	I1107 09:12:24.921298    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:24.921304    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:24.921310    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:24.921316    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:24.921326    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:24 GMT
	I1107 09:12:24.921811    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1022","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6780 chars]
	I1107 09:12:24.922113    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:24.922120    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:24.922126    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:24.922131    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:24.924216    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:24.924226    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:24.924232    9678 round_trippers.go:580]     Audit-Id: 2cb82c29-1f5d-4fe2-8b27-b9c39d44afc0
	I1107 09:12:24.924237    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:24.924243    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:24.924255    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:24.924261    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:24.924266    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:24 GMT
	I1107 09:12:24.924317    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:25.415277    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:25.415291    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:25.415298    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:25.415303    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:25.417529    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:25.417539    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:25.417546    9678 round_trippers.go:580]     Audit-Id: 3fc735f4-2f6c-4700-a551-fecb424ae2be
	I1107 09:12:25.417550    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:25.417556    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:25.417560    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:25.417567    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:25.417572    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:25 GMT
	I1107 09:12:25.417631    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1044","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6551 chars]
	I1107 09:12:25.417919    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:25.417926    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:25.417932    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:25.417938    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:25.420138    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:25.420147    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:25.420153    9678 round_trippers.go:580]     Audit-Id: bd8c627c-6526-421a-b738-46ce8b090881
	I1107 09:12:25.420157    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:25.420163    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:25.420167    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:25.420171    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:25.420177    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:25 GMT
	I1107 09:12:25.420375    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:25.420559    9678 pod_ready.go:92] pod "coredns-565d847f94-54csh" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:25.420570    9678 pod_ready.go:81] duration metric: took 6.011424402s waiting for pod "coredns-565d847f94-54csh" in "kube-system" namespace to be "Ready" ...
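	(For reference: the pod_ready lines above poll the pod roughly every 500ms until its Ready condition is True, with a 4m0s cap. A minimal client-go sketch of that pattern, assuming a default kubeconfig and using the pod name and namespace from this log, not minikube's actual implementation:

		package main

		import (
			"context"
			"fmt"
			"time"

			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/apimachinery/pkg/util/wait"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		func main() {
			// Assumption: credentials come from the default kubeconfig (~/.kube/config).
			cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
			if err != nil {
				panic(err)
			}
			client, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				panic(err)
			}
			// Poll every 500ms, give up after 4 minutes (the timeout shown in the log).
			err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-565d847f94-54csh", metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				// Done when the PodReady condition reports True.
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
			fmt.Println("pod ready:", err == nil)
		}
	)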
	I1107 09:12:25.420578    9678 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:25.420607    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:25.420611    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:25.420617    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:25.420623    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:25.422475    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:25.422487    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:25.422495    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:25.422504    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:25 GMT
	I1107 09:12:25.422511    9678 round_trippers.go:580]     Audit-Id: c7f22e37-9721-4e63-bcf8-beac26a24639
	I1107 09:12:25.422518    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:25.422523    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:25.422530    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:25.422725    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1004","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6268 chars]
	I1107 09:12:25.422989    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:25.422996    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:25.423001    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:25.423007    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:25.424720    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:25.424734    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:25.424751    9678 round_trippers.go:580]     Audit-Id: 2c3af5b2-ca15-47d5-9892-eef03b298c72
	I1107 09:12:25.424759    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:25.424764    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:25.424769    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:25.424774    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:25.424779    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:25 GMT
	I1107 09:12:25.424835    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:25.927364    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:25.927387    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:25.927400    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:25.927411    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:25.931228    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:25.931242    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:25.931250    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:25.931257    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:25.931263    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:25 GMT
	I1107 09:12:25.931270    9678 round_trippers.go:580]     Audit-Id: a4d78318-c992-4b1e-9169-8115b9b65794
	I1107 09:12:25.931276    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:25.931282    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:25.931380    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1004","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6268 chars]
	I1107 09:12:25.931710    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:25.931719    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:25.931727    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:25.931735    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:25.933528    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:25.933536    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:25.933541    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:25.933546    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:25.933550    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:25 GMT
	I1107 09:12:25.933557    9678 round_trippers.go:580]     Audit-Id: 72a8439c-7b5d-4bc5-80b2-42653ce9383e
	I1107 09:12:25.933562    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:25.933567    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:25.933603    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:26.426051    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:26.426077    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:26.426181    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:26.426192    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:26.430111    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:26.430126    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:26.430134    9678 round_trippers.go:580]     Audit-Id: 2e0c7db8-4251-44c3-8c66-acea90823aa8
	I1107 09:12:26.430146    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:26.430154    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:26.430164    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:26.430173    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:26.430180    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:26 GMT
	I1107 09:12:26.430245    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1004","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6268 chars]
	I1107 09:12:26.430579    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:26.430585    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:26.430591    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:26.430596    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:26.432361    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:26.432372    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:26.432380    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:26.432389    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:26.432398    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:26.432404    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:26.432420    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:26 GMT
	I1107 09:12:26.432427    9678 round_trippers.go:580]     Audit-Id: a6b34a2d-3e68-4129-b863-20ec047a2aee
	I1107 09:12:26.432661    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:26.927346    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:26.927370    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:26.927383    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:26.927393    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:26.931132    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:26.931147    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:26.931156    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:26.931164    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:26.931171    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:26.931180    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:26.931187    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:26 GMT
	I1107 09:12:26.931194    9678 round_trippers.go:580]     Audit-Id: dc7d4c62-6fe6-413c-8837-7d843adbc531
	I1107 09:12:26.931265    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1004","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6268 chars]
	I1107 09:12:26.931596    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:26.931605    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:26.931613    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:26.931620    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:26.933521    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:26.933530    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:26.933535    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:26.933541    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:26 GMT
	I1107 09:12:26.933546    9678 round_trippers.go:580]     Audit-Id: b5fa065e-90e7-4580-a904-3dbe40fd407a
	I1107 09:12:26.933551    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:26.933555    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:26.933559    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:26.933714    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:27.427273    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:27.427299    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:27.427311    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:27.427321    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:27.431209    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:27.431225    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:27.431233    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:27.431240    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:27 GMT
	I1107 09:12:27.431246    9678 round_trippers.go:580]     Audit-Id: cf647470-e446-4afb-a4d7-66ffba7f07e0
	I1107 09:12:27.431253    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:27.431260    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:27.431267    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:27.431329    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1004","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6268 chars]
	I1107 09:12:27.432257    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:27.432270    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:27.432283    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:27.432295    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:27.434423    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:27.434433    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:27.434439    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:27 GMT
	I1107 09:12:27.434444    9678 round_trippers.go:580]     Audit-Id: 679269e4-13ca-4601-8cd9-10e2eb1c6dbd
	I1107 09:12:27.434449    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:27.434453    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:27.434458    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:27.434462    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:27.434506    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:27.434684    9678 pod_ready.go:102] pod "etcd-multinode-090641" in "kube-system" namespace has status "Ready":"False"
	I1107 09:12:27.925431    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:27.925452    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:27.925465    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:27.925474    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:27.928902    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:27.928916    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:27.928922    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:27.928928    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:27.928932    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:27.928937    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:27.928942    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:27 GMT
	I1107 09:12:27.928947    9678 round_trippers.go:580]     Audit-Id: 4926074e-7742-4b12-bea4-be0a73066e84
	I1107 09:12:27.929003    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1004","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6268 chars]
	I1107 09:12:27.929262    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:27.929270    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:27.929276    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:27.929281    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:27.931160    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:27.931170    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:27.931176    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:27 GMT
	I1107 09:12:27.931181    9678 round_trippers.go:580]     Audit-Id: 26a82c72-96ad-4070-9efe-a104e35e8c53
	I1107 09:12:27.931187    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:27.931192    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:27.931197    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:27.931201    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:27.931249    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:28.427433    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:28.427455    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:28.427467    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:28.427477    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:28.431294    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:28.431309    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:28.431317    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:28 GMT
	I1107 09:12:28.431324    9678 round_trippers.go:580]     Audit-Id: b8ff3c0c-6e09-4d71-8f00-e7e9c500addf
	I1107 09:12:28.431331    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:28.431338    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:28.431345    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:28.431351    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:28.431412    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1004","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6268 chars]
	I1107 09:12:28.431732    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:28.431738    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:28.431744    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:28.431761    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:28.433544    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:28.433553    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:28.433559    9678 round_trippers.go:580]     Audit-Id: 3449e1d9-2deb-4d08-b909-346811b9b8c8
	I1107 09:12:28.433565    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:28.433570    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:28.433574    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:28.433579    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:28.433585    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:28 GMT
	I1107 09:12:28.433619    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:28.927318    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:28.927344    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:28.927361    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:28.927465    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:28.930854    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:28.930871    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:28.930882    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:28.930893    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:28.930914    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:28.930926    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:28.930945    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:28 GMT
	I1107 09:12:28.930969    9678 round_trippers.go:580]     Audit-Id: e085b733-447d-4aed-aa89-68068a2d6652
	I1107 09:12:28.931265    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1004","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6268 chars]
	I1107 09:12:28.931606    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:28.931629    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:28.931635    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:28.931641    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:28.933268    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:28.933278    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:28.933283    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:28.933288    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:28 GMT
	I1107 09:12:28.933294    9678 round_trippers.go:580]     Audit-Id: cf0d4e81-8f3b-4bf2-8c06-bad1dd239f3e
	I1107 09:12:28.933314    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:28.933326    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:28.933334    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:28.933576    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:29.425201    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:29.425217    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.425224    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.425230    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.427952    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:29.427964    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.427970    9678 round_trippers.go:580]     Audit-Id: 6b74d881-e10c-4e57-bd17-881874e0f504
	I1107 09:12:29.427981    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.427987    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.427991    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.427996    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.428000    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.428051    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1070","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6044 chars]
	I1107 09:12:29.428305    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:29.428312    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.428318    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.428324    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.430023    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:29.430034    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.430042    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.430047    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.430053    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.430058    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.430064    9678 round_trippers.go:580]     Audit-Id: 0260119d-7e64-4e91-8266-b7275e9fe925
	I1107 09:12:29.430069    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.430349    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:29.430565    9678 pod_ready.go:92] pod "etcd-multinode-090641" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:29.430576    9678 pod_ready.go:81] duration metric: took 4.009889972s waiting for pod "etcd-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.430587    9678 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.430616    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-090641
	I1107 09:12:29.430621    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.430627    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.430632    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.432726    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:29.432736    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.432742    9678 round_trippers.go:580]     Audit-Id: b226ecbb-f322-481d-9d3c-9d1a0d8a2bb9
	I1107 09:12:29.432747    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.432752    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.432757    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.432763    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.432769    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.432823    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-090641","namespace":"kube-system","uid":"3ae5af06-6458-4954-a296-a43002732bf4","resourceVersion":"1035","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"dd75cb8a49e2d9527f374a354a8b7d88","kubernetes.io/config.mirror":"dd75cb8a49e2d9527f374a354a8b7d88","kubernetes.io/config.seen":"2022-11-07T17:07:07.141853016Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8430 chars]
	I1107 09:12:29.433074    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:29.433080    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.433086    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.433091    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.435372    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:29.435382    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.435387    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.435392    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.435398    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.435404    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.435411    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.435418    9678 round_trippers.go:580]     Audit-Id: f216cb37-26d7-47be-9a9d-39d2d66c3533
	I1107 09:12:29.435468    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:29.435659    9678 pod_ready.go:92] pod "kube-apiserver-multinode-090641" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:29.435666    9678 pod_ready.go:81] duration metric: took 5.073553ms waiting for pod "kube-apiserver-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.435672    9678 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.435697    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-090641
	I1107 09:12:29.435703    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.435710    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.435717    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.437746    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:29.437757    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.437764    9678 round_trippers.go:580]     Audit-Id: 4eabb133-e54d-41fe-8d32-a4ba700ef567
	I1107 09:12:29.437772    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.437778    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.437785    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.437792    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.437799    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.438159    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-090641","namespace":"kube-system","uid":"1c2584e6-6b2e-4c67-aea4-7c5568355345","resourceVersion":"1050","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f4f4b8d09f56092bdb6c988421c46dbc","kubernetes.io/config.mirror":"f4f4b8d09f56092bdb6c988421c46dbc","kubernetes.io/config.seen":"2022-11-07T17:07:07.141853863Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 8005 chars]
	I1107 09:12:29.438431    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:29.438438    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.438446    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.438452    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.440250    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:29.440258    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.440264    9678 round_trippers.go:580]     Audit-Id: ec203f5c-6933-4cc2-8b06-1ff229bc07f8
	I1107 09:12:29.440268    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.440273    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.440278    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.440284    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.440295    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.440336    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:29.440508    9678 pod_ready.go:92] pod "kube-controller-manager-multinode-090641" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:29.440515    9678 pod_ready.go:81] duration metric: took 4.838656ms waiting for pod "kube-controller-manager-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.440522    9678 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hxglr" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.440546    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-hxglr
	I1107 09:12:29.440550    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.440556    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.440562    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.442489    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:29.442498    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.442503    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.442509    9678 round_trippers.go:580]     Audit-Id: 90d2d8d2-bfb7-4ef4-b22a-a44928524ec6
	I1107 09:12:29.442514    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.442519    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.442524    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.442529    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.442575    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hxglr","generateName":"kube-proxy-","namespace":"kube-system","uid":"64e6c03e-e0da-4b75-a1eb-ff55dd0c84ff","resourceVersion":"846","creationTimestamp":"2022-11-07T17:07:43Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"24ccc204-14dd-4551-b05e-811ba8bd745a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ccc204-14dd-4551-b05e-811ba8bd745a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I1107 09:12:29.442792    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641-m02
	I1107 09:12:29.442798    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.442804    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.442810    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.444671    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:29.444681    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.444686    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.444691    9678 round_trippers.go:580]     Audit-Id: 16cb5476-f8e9-44ae-ab6f-dfed0eca938b
	I1107 09:12:29.444696    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.444701    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.444707    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.444711    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.445062    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641-m02","uid":"1d15109e-f0b8-4c7f-a0d6-c4e58cde1a91","resourceVersion":"858","creationTimestamp":"2022-11-07T17:10:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:10:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:10:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4536 chars]
	I1107 09:12:29.445219    9678 pod_ready.go:92] pod "kube-proxy-hxglr" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:29.445225    9678 pod_ready.go:81] duration metric: took 4.69866ms waiting for pod "kube-proxy-hxglr" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.445230    9678 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nwck5" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.445254    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-nwck5
	I1107 09:12:29.445259    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.445264    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.445270    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.447018    9678 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 09:12:29.447027    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.447033    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.447039    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.447044    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.447048    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.447053    9678 round_trippers.go:580]     Audit-Id: 5ea3ae87-f747-4cc9-bb88-c02db4498193
	I1107 09:12:29.447057    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.447242    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nwck5","generateName":"kube-proxy-","namespace":"kube-system","uid":"017b9de2-3593-4e50-9493-7d14c0b994ce","resourceVersion":"945","creationTimestamp":"2022-11-07T17:08:26Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"24ccc204-14dd-4551-b05e-811ba8bd745a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:08:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ccc204-14dd-4551-b05e-811ba8bd745a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5743 chars]
	I1107 09:12:29.447466    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641-m03
	I1107 09:12:29.447472    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.447478    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.447484    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.449006    9678 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1107 09:12:29.449014    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.449019    9678 round_trippers.go:580]     Audit-Id: 52b68d52-8c1b-4bc4-ba13-b914a3312665
	I1107 09:12:29.449023    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.449028    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.449033    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.449039    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.449043    9678 round_trippers.go:580]     Content-Length: 210
	I1107 09:12:29.449048    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.449057    9678 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-090641-m03\" not found","reason":"NotFound","details":{"name":"multinode-090641-m03","kind":"nodes"},"code":404}
	I1107 09:12:29.449156    9678 pod_ready.go:97] node "multinode-090641-m03" hosting pod "kube-proxy-nwck5" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-090641-m03": nodes "multinode-090641-m03" not found
	I1107 09:12:29.449163    9678 pod_ready.go:81] duration metric: took 3.928728ms waiting for pod "kube-proxy-nwck5" in "kube-system" namespace to be "Ready" ...
	E1107 09:12:29.449168    9678 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-090641-m03" hosting pod "kube-proxy-nwck5" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-090641-m03": nodes "multinode-090641-m03" not found
	I1107 09:12:29.449174    9678 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rqnqb" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.626425    9678 request.go:614] Waited for 177.201575ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-rqnqb
	I1107 09:12:29.626509    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-rqnqb
	I1107 09:12:29.626519    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.626532    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.626542    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.630543    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:29.630558    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.630566    9678 round_trippers.go:580]     Audit-Id: 4c2b2093-b33e-48d5-b3b9-3d2a6264b1cc
	I1107 09:12:29.630572    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.630579    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.630586    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.630592    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.630599    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.630677    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rqnqb","generateName":"kube-proxy-","namespace":"kube-system","uid":"f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d","resourceVersion":"1029","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"24ccc204-14dd-4551-b05e-811ba8bd745a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ccc204-14dd-4551-b05e-811ba8bd745a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
	I1107 09:12:29.826052    9678 request.go:614] Waited for 195.027314ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:29.826095    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:29.826104    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:29.826113    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:29.826121    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:29.828657    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:29.828668    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:29.828673    9678 round_trippers.go:580]     Audit-Id: bf3ef967-03df-401a-b943-c3c1834e40e8
	I1107 09:12:29.828680    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:29.828686    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:29.828691    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:29.828696    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:29.828701    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:29 GMT
	I1107 09:12:29.828830    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:29.829027    9678 pod_ready.go:92] pod "kube-proxy-rqnqb" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:29.829034    9678 pod_ready.go:81] duration metric: took 379.845663ms waiting for pod "kube-proxy-rqnqb" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:29.829040    9678 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:30.025357    9678 request.go:614] Waited for 196.266866ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-090641
	I1107 09:12:30.025493    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-090641
	I1107 09:12:30.025505    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:30.025517    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:30.025529    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:30.028893    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:30.028907    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:30.028916    9678 round_trippers.go:580]     Audit-Id: 79c57695-7aec-4ff0-b532-44532c205ce1
	I1107 09:12:30.028922    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:30.028930    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:30.028936    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:30.028942    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:30.028949    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:30 GMT
	I1107 09:12:30.029017    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-090641","namespace":"kube-system","uid":"76a48883-135f-49f5-831d-d0182408b2ca","resourceVersion":"1041","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ab8bd35a88b2fdd19251e7cd74d99137","kubernetes.io/config.mirror":"ab8bd35a88b2fdd19251e7cd74d99137","kubernetes.io/config.seen":"2022-11-07T17:07:07.141854549Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4887 chars]
	I1107 09:12:30.225437    9678 request.go:614] Waited for 196.110666ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:30.225548    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:30.225559    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:30.225571    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:30.225581    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:30.229341    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:30.229357    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:30.229364    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:30.229373    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:30 GMT
	I1107 09:12:30.229382    9678 round_trippers.go:580]     Audit-Id: 7ad65763-80b8-44e7-a4d6-ab7e5b6d03cc
	I1107 09:12:30.229390    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:30.229396    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:30.229402    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:30.229558    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:30.229791    9678 pod_ready.go:92] pod "kube-scheduler-multinode-090641" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:30.229798    9678 pod_ready.go:81] duration metric: took 400.743057ms waiting for pod "kube-scheduler-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:30.229804    9678 pod_ready.go:38] duration metric: took 10.828022896s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 09:12:30.229817    9678 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 09:12:30.237682    9678 command_runner.go:130] > -16
	I1107 09:12:30.237696    9678 ops.go:34] apiserver oom_adj: -16
	I1107 09:12:30.237701    9678 kubeadm.go:631] restartCluster took 23.17473863s
	I1107 09:12:30.237709    9678 kubeadm.go:398] StartCluster complete in 23.205230176s
	I1107 09:12:30.237721    9678 settings.go:142] acquiring lock: {Name:mkacd69bfe5f4d7bab8b044c0ff487fe5c3f0cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:12:30.237811    9678 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:12:30.238187    9678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/kubeconfig: {Name:mk892d56d979702eee7d784abc692970bda7bca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:12:30.238807    9678 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:12:30.239001    9678 kapi.go:59] client config for multinode-090641: &rest.Config{Host:"https://127.0.0.1:51429", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.key", CAFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2345ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 09:12:30.239190    9678 round_trippers.go:463] GET https://127.0.0.1:51429/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 09:12:30.239196    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:30.239202    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:30.239208    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:30.241483    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:30.241492    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:30.241498    9678 round_trippers.go:580]     Content-Length: 292
	I1107 09:12:30.241505    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:30 GMT
	I1107 09:12:30.241511    9678 round_trippers.go:580]     Audit-Id: 6260f10a-6484-4908-b402-a5848d4dfefa
	I1107 09:12:30.241515    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:30.241521    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:30.241525    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:30.241531    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:30.241542    9678 request.go:1154] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bc5e3846-355c-4569-b48e-ed482b8ae45b","resourceVersion":"1078","creationTimestamp":"2022-11-07T17:07:07Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1107 09:12:30.241613    9678 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-090641" rescaled to 1
	I1107 09:12:30.241643    9678 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 09:12:30.241662    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 09:12:30.241690    9678 addons.go:486] enableAddons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I1107 09:12:30.241811    9678 config.go:180] Loaded profile config "multinode-090641": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:12:30.283890    9678 addons.go:65] Setting storage-provisioner=true in profile "multinode-090641"
	I1107 09:12:30.283891    9678 addons.go:65] Setting default-storageclass=true in profile "multinode-090641"
	I1107 09:12:30.283754    9678 out.go:177] * Verifying Kubernetes components...
	I1107 09:12:30.283940    9678 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-090641"
	I1107 09:12:30.283939    9678 addons.go:227] Setting addon storage-provisioner=true in "multinode-090641"
	W1107 09:12:30.305413    9678 addons.go:236] addon storage-provisioner should already be in state true
	I1107 09:12:30.305430    9678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 09:12:30.305478    9678 host.go:66] Checking if "multinode-090641" exists ...
	I1107 09:12:30.305718    9678 cli_runner.go:164] Run: docker container inspect multinode-090641 --format={{.State.Status}}
	I1107 09:12:30.305827    9678 cli_runner.go:164] Run: docker container inspect multinode-090641 --format={{.State.Status}}
	I1107 09:12:30.352382    9678 command_runner.go:130] > apiVersion: v1
	I1107 09:12:30.352405    9678 command_runner.go:130] > data:
	I1107 09:12:30.352410    9678 command_runner.go:130] >   Corefile: |
	I1107 09:12:30.352413    9678 command_runner.go:130] >     .:53 {
	I1107 09:12:30.352417    9678 command_runner.go:130] >         errors
	I1107 09:12:30.352425    9678 command_runner.go:130] >         health {
	I1107 09:12:30.352433    9678 command_runner.go:130] >            lameduck 5s
	I1107 09:12:30.352438    9678 command_runner.go:130] >         }
	I1107 09:12:30.352443    9678 command_runner.go:130] >         ready
	I1107 09:12:30.352457    9678 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1107 09:12:30.352465    9678 command_runner.go:130] >            pods insecure
	I1107 09:12:30.352474    9678 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1107 09:12:30.352484    9678 command_runner.go:130] >            ttl 30
	I1107 09:12:30.352488    9678 command_runner.go:130] >         }
	I1107 09:12:30.352492    9678 command_runner.go:130] >         prometheus :9153
	I1107 09:12:30.352499    9678 command_runner.go:130] >         hosts {
	I1107 09:12:30.352508    9678 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I1107 09:12:30.352514    9678 command_runner.go:130] >            fallthrough
	I1107 09:12:30.352526    9678 command_runner.go:130] >         }
	I1107 09:12:30.352536    9678 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1107 09:12:30.352542    9678 command_runner.go:130] >            max_concurrent 1000
	I1107 09:12:30.352546    9678 command_runner.go:130] >         }
	I1107 09:12:30.352550    9678 command_runner.go:130] >         cache 30
	I1107 09:12:30.352553    9678 command_runner.go:130] >         loop
	I1107 09:12:30.352561    9678 command_runner.go:130] >         reload
	I1107 09:12:30.352568    9678 command_runner.go:130] >         loadbalance
	I1107 09:12:30.352572    9678 command_runner.go:130] >     }
	I1107 09:12:30.352576    9678 command_runner.go:130] > kind: ConfigMap
	I1107 09:12:30.352582    9678 command_runner.go:130] > metadata:
	I1107 09:12:30.352586    9678 command_runner.go:130] >   creationTimestamp: "2022-11-07T17:07:07Z"
	I1107 09:12:30.352592    9678 command_runner.go:130] >   name: coredns
	I1107 09:12:30.352596    9678 command_runner.go:130] >   namespace: kube-system
	I1107 09:12:30.352601    9678 command_runner.go:130] >   resourceVersion: "365"
	I1107 09:12:30.352608    9678 command_runner.go:130] >   uid: 8d2177f7-228b-4356-ad32-03ee101d8c94
	I1107 09:12:30.352714    9678 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
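The "already contains ... skipping" decision above comes from inspecting the Corefile held in the coredns ConfigMap (dumped by the configmap get just before it). A small hedged sketch of that check, as a function in a hypothetical helper package rather than minikube's actual code:

    package example

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // hasHostRecord reports whether the coredns Corefile already carries a host
    // entry for the given name, e.g. "host.minikube.internal" in the log above.
    func hasHostRecord(ctx context.Context, cs kubernetes.Interface, host string) (bool, error) {
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return strings.Contains(cm.Data["Corefile"], host), nil
    }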
	I1107 09:12:30.352829    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:30.369035    9678 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:12:30.390272    9678 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 09:12:30.390547    9678 kapi.go:59] client config for multinode-090641: &rest.Config{Host:"https://127.0.0.1:51429", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.key", CAFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2345ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 09:12:30.411365    9678 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 09:12:30.411384    9678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 09:12:30.411537    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:30.411758    9678 round_trippers.go:463] GET https://127.0.0.1:51429/apis/storage.k8s.io/v1/storageclasses
	I1107 09:12:30.411776    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:30.411789    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:30.412642    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:30.416093    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:30.416107    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:30.416113    9678 round_trippers.go:580]     Audit-Id: 7845e956-c9ff-4f97-8e84-cbaf393626c6
	I1107 09:12:30.416118    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:30.416122    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:30.416130    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:30.416136    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:30.416140    9678 round_trippers.go:580]     Content-Length: 1274
	I1107 09:12:30.416145    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:30 GMT
	I1107 09:12:30.416196    9678 request.go:1154] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"1082"},"items":[{"metadata":{"name":"standard","uid":"3f887db3-d96f-4dbf-948f-c470c5720b23","resourceVersion":"378","creationTimestamp":"2022-11-07T17:07:21Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-11-07T17:07:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubern
etes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is [truncated 250 chars]
	I1107 09:12:30.416657    9678 request.go:1154] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3f887db3-d96f-4dbf-948f-c470c5720b23","resourceVersion":"378","creationTimestamp":"2022-11-07T17:07:21Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-11-07T17:07:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1107 09:12:30.416694    9678 round_trippers.go:463] PUT https://127.0.0.1:51429/apis/storage.k8s.io/v1/storageclasses/standard
	I1107 09:12:30.416699    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:30.416706    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:30.416711    9678 round_trippers.go:473]     Content-Type: application/json
	I1107 09:12:30.416716    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:30.421027    9678 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 09:12:30.421046    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:30.421053    9678 round_trippers.go:580]     Content-Length: 1220
	I1107 09:12:30.421058    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:30 GMT
	I1107 09:12:30.421063    9678 round_trippers.go:580]     Audit-Id: 27eaa177-dbd5-4d6b-a7a8-3e55d1f8b1cb
	I1107 09:12:30.421068    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:30.421073    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:30.421077    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:30.421082    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:30.421101    9678 request.go:1154] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3f887db3-d96f-4dbf-948f-c470c5720b23","resourceVersion":"378","creationTimestamp":"2022-11-07T17:07:21Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-11-07T17:07:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1107 09:12:30.421172    9678 addons.go:227] Setting addon default-storageclass=true in "multinode-090641"
	W1107 09:12:30.421180    9678 addons.go:236] addon default-storageclass should already be in state true
	I1107 09:12:30.421195    9678 host.go:66] Checking if "multinode-090641" exists ...
	I1107 09:12:30.421573    9678 cli_runner.go:164] Run: docker container inspect multinode-090641 --format={{.State.Status}}
	I1107 09:12:30.423210    9678 node_ready.go:35] waiting up to 6m0s for node "multinode-090641" to be "Ready" ...
	I1107 09:12:30.425442    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:30.425450    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:30.425457    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:30.425462    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:30.427943    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:30.427962    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:30.427976    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:30.427985    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:30.427990    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:30 GMT
	I1107 09:12:30.427995    9678 round_trippers.go:580]     Audit-Id: 9c5de483-8509-419e-aebd-205ae798f5b1
	I1107 09:12:30.428000    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:30.428005    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:30.428068    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:30.428346    9678 node_ready.go:49] node "multinode-090641" has status "Ready":"True"
	I1107 09:12:30.428355    9678 node_ready.go:38] duration metric: took 5.124126ms waiting for node "multinode-090641" to be "Ready" ...
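The node_ready wait above resolves immediately because the node's Ready condition is already True. A minimal polling sketch of the same idea (interval and timeout are illustrative, not minikube's values):

    package example

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node until its Ready condition reports True.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // keep retrying on transient lookup errors
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }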
	I1107 09:12:30.428365    9678 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 09:12:30.473525    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:30.480958    9678 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 09:12:30.480969    9678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 09:12:30.481050    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:30.537952    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:30.564924    9678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 09:12:30.625801    9678 request.go:614] Waited for 197.384346ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods
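The repeated "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's own rate limiter, which defaults to QPS 5 / Burst 10 when the rest.Config leaves them at zero (as the config dump above does). A hedged sketch of raising those limits; the numbers are illustrative, not a minikube setting:

    package example

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // newFasterClient raises the client-side rate limits that produce the
    // throttling waits seen in the log above.
    func newFasterClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
        cfg.QPS = 50   // illustrative; default is 5 when left at 0
        cfg.Burst = 100 // illustrative; default is 10 when left at 0
        return kubernetes.NewForConfig(cfg)
    }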
	I1107 09:12:30.625851    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods
	I1107 09:12:30.625857    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:30.625863    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:30.625870    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:30.628654    9678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 09:12:30.630227    9678 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 09:12:30.630241    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:30.630247    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:30.630255    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:30 GMT
	I1107 09:12:30.630262    9678 round_trippers.go:580]     Audit-Id: b2b99a63-956c-425d-8272-25441cdb5be8
	I1107 09:12:30.630268    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:30.630273    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:30.630278    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:30.631727    9678 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1082"},"items":[{"metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1044","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84957 chars]
	I1107 09:12:30.634066    9678 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-54csh" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:30.727528    9678 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I1107 09:12:30.729314    9678 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I1107 09:12:30.731810    9678 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1107 09:12:30.733453    9678 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1107 09:12:30.735252    9678 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I1107 09:12:30.776061    9678 command_runner.go:130] > pod/storage-provisioner configured
	I1107 09:12:30.825343    9678 request.go:614] Waited for 191.221708ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:30.825386    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/coredns-565d847f94-54csh
	I1107 09:12:30.825392    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:30.825410    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:30.825420    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:30.828048    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:30.828062    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:30.828068    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:30 GMT
	I1107 09:12:30.828072    9678 round_trippers.go:580]     Audit-Id: 520a42f8-1e55-4ed3-800a-4a7ec27cdf34
	I1107 09:12:30.828078    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:30.828082    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:30.828087    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:30.828092    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:30.828156    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1044","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6551 chars]
	I1107 09:12:30.834978    9678 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I1107 09:12:30.884287    9678 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1107 09:12:30.905405    9678 addons.go:488] enableAddons completed in 663.696406ms
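Both addon manifests were copied to /etc/kubernetes/addons and applied on the node with the kubectl binary bundled for the cluster's Kubernetes version, as the ssh_runner lines above show. A rough local equivalent of that invocation, assuming the same paths exist on the target machine (this runs the command locally rather than over SSH):

    package example

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon mirrors the kubectl invocation from the ssh_runner log lines.
    func applyAddon(manifest string) error {
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.25.3/kubectl", "apply", "-f", manifest)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply %s: %v: %s", manifest, err, out)
        }
        return nil
    }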
	I1107 09:12:31.025727    9678 request.go:614] Waited for 197.257088ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:31.025827    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:31.025837    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:31.025850    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:31.025863    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:31.029840    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:31.029855    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:31.029864    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:31.029877    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:31 GMT
	I1107 09:12:31.029893    9678 round_trippers.go:580]     Audit-Id: ce2e8a40-75ad-43aa-b6e8-97e16cd4546e
	I1107 09:12:31.029900    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:31.029907    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:31.029913    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:31.029983    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:31.030247    9678 pod_ready.go:92] pod "coredns-565d847f94-54csh" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:31.030254    9678 pod_ready.go:81] duration metric: took 396.163978ms waiting for pod "coredns-565d847f94-54csh" in "kube-system" namespace to be "Ready" ...
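Each of the pod_ready waits above (coredns here, then etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler below) boils down to checking the pod's Ready condition. A minimal sketch of that test, not minikube's actual helper:

    package example

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the named pod's Ready condition is True.
    func podReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }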
	I1107 09:12:31.030260    9678 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:31.225645    9678 request.go:614] Waited for 195.273388ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:31.225703    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/etcd-multinode-090641
	I1107 09:12:31.225715    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:31.225727    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:31.225738    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:31.229467    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:31.229483    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:31.229491    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:31 GMT
	I1107 09:12:31.229497    9678 round_trippers.go:580]     Audit-Id: 17435435-36d3-4071-b118-105361a31f15
	I1107 09:12:31.229503    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:31.229514    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:31.229523    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:31.229530    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:31.229834    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-090641","namespace":"kube-system","uid":"b5cec8d5-21cb-4a1e-a05a-92b541499e1c","resourceVersion":"1070","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.mirror":"cd4b4557d7e5dec75e05f1f21986673d","kubernetes.io/config.seen":"2022-11-07T17:07:07.141844103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6044 chars]
	I1107 09:12:31.425335    9678 request.go:614] Waited for 195.124509ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:31.425366    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:31.425372    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:31.425379    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:31.425384    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:31.427921    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:31.427934    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:31.427940    9678 round_trippers.go:580]     Audit-Id: 3cfb11ed-2642-4ee3-b45b-37800f1f820c
	I1107 09:12:31.427949    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:31.427955    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:31.427959    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:31.427964    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:31.427969    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:31 GMT
	I1107 09:12:31.428115    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:31.428331    9678 pod_ready.go:92] pod "etcd-multinode-090641" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:31.428339    9678 pod_ready.go:81] duration metric: took 398.064619ms waiting for pod "etcd-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:31.428349    9678 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:31.627387    9678 request.go:614] Waited for 198.955792ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-090641
	I1107 09:12:31.627539    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-090641
	I1107 09:12:31.627552    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:31.627565    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:31.627575    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:31.631493    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:31.631511    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:31.631520    9678 round_trippers.go:580]     Audit-Id: aa3a36f2-a38b-4385-a40c-1454d9b56f21
	I1107 09:12:31.631527    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:31.631556    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:31.631567    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:31.631577    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:31.631584    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:31 GMT
	I1107 09:12:31.631658    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-090641","namespace":"kube-system","uid":"3ae5af06-6458-4954-a296-a43002732bf4","resourceVersion":"1035","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"dd75cb8a49e2d9527f374a354a8b7d88","kubernetes.io/config.mirror":"dd75cb8a49e2d9527f374a354a8b7d88","kubernetes.io/config.seen":"2022-11-07T17:07:07.141853016Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8430 chars]
	I1107 09:12:31.827031    9678 request.go:614] Waited for 195.035201ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:31.827138    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:31.827147    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:31.827162    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:31.827180    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:31.831080    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:31.831091    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:31.831097    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:31.831101    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:31.831108    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:31 GMT
	I1107 09:12:31.831113    9678 round_trippers.go:580]     Audit-Id: bbf099a6-fa76-417b-a18d-b655d18e0892
	I1107 09:12:31.831119    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:31.831123    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:31.831170    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:31.831383    9678 pod_ready.go:92] pod "kube-apiserver-multinode-090641" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:31.831391    9678 pod_ready.go:81] duration metric: took 403.026408ms waiting for pod "kube-apiserver-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:31.831398    9678 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:32.027368    9678 request.go:614] Waited for 195.899398ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-090641
	I1107 09:12:32.027544    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-090641
	I1107 09:12:32.027555    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:32.027566    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:32.027576    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:32.031473    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:32.031488    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:32.031495    9678 round_trippers.go:580]     Audit-Id: 7d17e35e-e8a9-405b-bc6a-4947c0070abf
	I1107 09:12:32.031502    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:32.031508    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:32.031516    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:32.031531    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:32.031539    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:32 GMT
	I1107 09:12:32.031908    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-090641","namespace":"kube-system","uid":"1c2584e6-6b2e-4c67-aea4-7c5568355345","resourceVersion":"1050","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f4f4b8d09f56092bdb6c988421c46dbc","kubernetes.io/config.mirror":"f4f4b8d09f56092bdb6c988421c46dbc","kubernetes.io/config.seen":"2022-11-07T17:07:07.141853863Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 8005 chars]
	I1107 09:12:32.226376    9678 request.go:614] Waited for 194.121304ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:32.226422    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:32.226430    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:32.226442    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:32.226465    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:32.230349    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:32.230365    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:32.230373    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:32.230380    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:32.230387    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:32.230397    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:32 GMT
	I1107 09:12:32.230404    9678 round_trippers.go:580]     Audit-Id: b60ecc82-8cad-49e5-aac6-3ee492825d0a
	I1107 09:12:32.230411    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:32.230485    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:32.230757    9678 pod_ready.go:92] pod "kube-controller-manager-multinode-090641" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:32.230763    9678 pod_ready.go:81] duration metric: took 399.350714ms waiting for pod "kube-controller-manager-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:32.230770    9678 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hxglr" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:32.427373    9678 request.go:614] Waited for 196.546616ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-hxglr
	I1107 09:12:32.427587    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-hxglr
	I1107 09:12:32.427602    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:32.427619    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:32.427647    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:32.431370    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:32.431385    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:32.431393    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:32.431399    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:32 GMT
	I1107 09:12:32.431406    9678 round_trippers.go:580]     Audit-Id: e54da4ac-f9eb-45fa-9653-c35eb6cb3b42
	I1107 09:12:32.431413    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:32.431419    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:32.431426    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:32.431502    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hxglr","generateName":"kube-proxy-","namespace":"kube-system","uid":"64e6c03e-e0da-4b75-a1eb-ff55dd0c84ff","resourceVersion":"846","creationTimestamp":"2022-11-07T17:07:43Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"24ccc204-14dd-4551-b05e-811ba8bd745a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ccc204-14dd-4551-b05e-811ba8bd745a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I1107 09:12:32.625358    9678 request.go:614] Waited for 193.521833ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641-m02
	I1107 09:12:32.625449    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641-m02
	I1107 09:12:32.625457    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:32.625468    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:32.625490    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:32.629085    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:32.629098    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:32.629104    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:32 GMT
	I1107 09:12:32.629109    9678 round_trippers.go:580]     Audit-Id: be85d5d8-54f0-49cf-8bd5-3fb5f000e9ba
	I1107 09:12:32.629114    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:32.629119    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:32.629126    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:32.629132    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:32.629229    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641-m02","uid":"1d15109e-f0b8-4c7f-a0d6-c4e58cde1a91","resourceVersion":"858","creationTimestamp":"2022-11-07T17:10:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:10:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:10:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4536 chars]
	I1107 09:12:32.629407    9678 pod_ready.go:92] pod "kube-proxy-hxglr" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:32.629414    9678 pod_ready.go:81] duration metric: took 398.628333ms waiting for pod "kube-proxy-hxglr" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:32.629420    9678 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nwck5" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:32.825363    9678 request.go:614] Waited for 195.900521ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-nwck5
	I1107 09:12:32.825468    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-nwck5
	I1107 09:12:32.825516    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:32.825529    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:32.825540    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:32.829492    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:32.829508    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:32.829516    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:32.829522    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:32.829529    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:32.829535    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:32.829543    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:32 GMT
	I1107 09:12:32.829555    9678 round_trippers.go:580]     Audit-Id: 938009c6-b676-4029-9896-067702d49676
	I1107 09:12:32.829638    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nwck5","generateName":"kube-proxy-","namespace":"kube-system","uid":"017b9de2-3593-4e50-9493-7d14c0b994ce","resourceVersion":"945","creationTimestamp":"2022-11-07T17:08:26Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"24ccc204-14dd-4551-b05e-811ba8bd745a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:08:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ccc204-14dd-4551-b05e-811ba8bd745a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5743 chars]
	I1107 09:12:33.027503    9678 request.go:614] Waited for 197.432804ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641-m03
	I1107 09:12:33.027550    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641-m03
	I1107 09:12:33.027558    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:33.027570    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:33.027582    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:33.031425    9678 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I1107 09:12:33.031445    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:33.031459    9678 round_trippers.go:580]     Audit-Id: 4e848c86-93e8-49b0-bbb6-7c281134cf31
	I1107 09:12:33.031470    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:33.031491    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:33.031519    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:33.031533    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:33.031543    9678 round_trippers.go:580]     Content-Length: 210
	I1107 09:12:33.031550    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:33 GMT
	I1107 09:12:33.031571    9678 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-090641-m03\" not found","reason":"NotFound","details":{"name":"multinode-090641-m03","kind":"nodes"},"code":404}
	I1107 09:12:33.031642    9678 pod_ready.go:97] node "multinode-090641-m03" hosting pod "kube-proxy-nwck5" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-090641-m03": nodes "multinode-090641-m03" not found
	I1107 09:12:33.031652    9678 pod_ready.go:81] duration metric: took 402.216594ms waiting for pod "kube-proxy-nwck5" in "kube-system" namespace to be "Ready" ...
	E1107 09:12:33.031660    9678 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-090641-m03" hosting pod "kube-proxy-nwck5" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-090641-m03": nodes "multinode-090641-m03" not found
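The 404 above means kube-proxy-nwck5 is scheduled to a node (multinode-090641-m03) that no longer exists, so the wait skips it rather than failing. A small sketch of telling that apart from other lookup errors with client-go's error helpers:

    package example

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeExists distinguishes a genuinely missing node (the NotFound above)
    // from other, retryable lookup failures.
    func nodeExists(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        _, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return false, nil
        }
        if err != nil {
            return false, err
        }
        return true, nil
    }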
	I1107 09:12:33.031668    9678 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rqnqb" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:33.227338    9678 request.go:614] Waited for 195.621561ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-rqnqb
	I1107 09:12:33.227479    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-proxy-rqnqb
	I1107 09:12:33.227489    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:33.227502    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:33.227513    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:33.231189    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:33.231206    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:33.231217    9678 round_trippers.go:580]     Audit-Id: 8ecd7478-24b3-4e1e-8c86-ec638699ea9c
	I1107 09:12:33.231228    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:33.231238    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:33.231252    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:33.231268    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:33.231283    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:33 GMT
	I1107 09:12:33.231422    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rqnqb","generateName":"kube-proxy-","namespace":"kube-system","uid":"f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d","resourceVersion":"1029","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"24ccc204-14dd-4551-b05e-811ba8bd745a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ccc204-14dd-4551-b05e-811ba8bd745a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
	I1107 09:12:33.425418    9678 request.go:614] Waited for 193.613243ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:33.425463    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:33.425472    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:33.425481    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:33.425491    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:33.428237    9678 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 09:12:33.428247    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:33.428252    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:33.428257    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:33 GMT
	I1107 09:12:33.428262    9678 round_trippers.go:580]     Audit-Id: 250d2cf1-e02b-43de-8019-2835fc6bbf00
	I1107 09:12:33.428267    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:33.428272    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:33.428276    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:33.428324    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:33.428523    9678 pod_ready.go:92] pod "kube-proxy-rqnqb" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:33.428530    9678 pod_ready.go:81] duration metric: took 396.846496ms waiting for pod "kube-proxy-rqnqb" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:33.428535    9678 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:33.626693    9678 request.go:614] Waited for 198.100809ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-090641
	I1107 09:12:33.626829    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-090641
	I1107 09:12:33.626845    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:33.626859    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:33.626872    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:33.630954    9678 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 09:12:33.630971    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:33.630982    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:33.630991    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:33 GMT
	I1107 09:12:33.631000    9678 round_trippers.go:580]     Audit-Id: 106278b3-bbd0-4a43-a8c2-d390b51f92b6
	I1107 09:12:33.631008    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:33.631014    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:33.631021    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:33.631162    9678 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-090641","namespace":"kube-system","uid":"76a48883-135f-49f5-831d-d0182408b2ca","resourceVersion":"1041","creationTimestamp":"2022-11-07T17:07:07Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ab8bd35a88b2fdd19251e7cd74d99137","kubernetes.io/config.mirror":"ab8bd35a88b2fdd19251e7cd74d99137","kubernetes.io/config.seen":"2022-11-07T17:07:07.141854549Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4887 chars]
	I1107 09:12:33.825636    9678 request.go:614] Waited for 194.128254ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:33.825791    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes/multinode-090641
	I1107 09:12:33.825802    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:33.825814    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:33.825826    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:33.829974    9678 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 09:12:33.829986    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:33.829992    9678 round_trippers.go:580]     Audit-Id: 7af5ba8c-3e7e-44ea-82f6-ecddeb79acad
	I1107 09:12:33.829997    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:33.830001    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:33.830006    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:33.830011    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:33.830015    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:33 GMT
	I1107 09:12:33.830068    9678 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-07T17:07:03Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1107 09:12:33.830268    9678 pod_ready.go:92] pod "kube-scheduler-multinode-090641" in "kube-system" namespace has status "Ready":"True"
	I1107 09:12:33.830274    9678 pod_ready.go:81] duration metric: took 401.723735ms waiting for pod "kube-scheduler-multinode-090641" in "kube-system" namespace to be "Ready" ...
	I1107 09:12:33.830281    9678 pod_ready.go:38] duration metric: took 3.40181826s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 09:12:33.830294    9678 api_server.go:51] waiting for apiserver process to appear ...
	I1107 09:12:33.830354    9678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:12:33.839672    9678 command_runner.go:130] > 1791
	I1107 09:12:33.840515    9678 api_server.go:71] duration metric: took 3.598766104s to wait for apiserver process to appear ...
	I1107 09:12:33.840524    9678 api_server.go:87] waiting for apiserver healthz status ...
	I1107 09:12:33.840535    9678 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51429/healthz ...
	I1107 09:12:33.845950    9678 api_server.go:278] https://127.0.0.1:51429/healthz returned 200:
	ok
	I1107 09:12:33.845979    9678 round_trippers.go:463] GET https://127.0.0.1:51429/version
	I1107 09:12:33.845984    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:33.845990    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:33.845996    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:33.846957    9678 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1107 09:12:33.846967    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:33.846972    9678 round_trippers.go:580]     Audit-Id: df77bf83-4269-4357-9553-7a5b9f6148e4
	I1107 09:12:33.846978    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:33.846983    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:33.846988    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:33.846993    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:33.846998    9678 round_trippers.go:580]     Content-Length: 263
	I1107 09:12:33.847002    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:33 GMT
	I1107 09:12:33.847012    9678 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1107 09:12:33.847042    9678 api_server.go:140] control plane version: v1.25.3
	I1107 09:12:33.847048    9678 api_server.go:130] duration metric: took 6.520078ms to wait for apiserver health ...
	I1107 09:12:33.847055    9678 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 09:12:34.025926    9678 request.go:614] Waited for 178.70746ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods
	I1107 09:12:34.025988    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods
	I1107 09:12:34.026000    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:34.026013    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:34.026024    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:34.031391    9678 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1107 09:12:34.031403    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:34.031409    9678 round_trippers.go:580]     Audit-Id: fb0ff7f8-a2c3-4df1-89cf-d50d8fadb2ee
	I1107 09:12:34.031434    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:34.031443    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:34.031448    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:34.031453    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:34.031458    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:34 GMT
	I1107 09:12:34.032825    9678 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1082"},"items":[{"metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1044","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84957 chars]
	I1107 09:12:34.034785    9678 system_pods.go:59] 12 kube-system pods found
	I1107 09:12:34.034794    9678 system_pods.go:61] "coredns-565d847f94-54csh" [6e280b18-683c-4888-93db-3756e665d1f6] Running
	I1107 09:12:34.034798    9678 system_pods.go:61] "etcd-multinode-090641" [b5cec8d5-21cb-4a1e-a05a-92b541499e1c] Running
	I1107 09:12:34.034802    9678 system_pods.go:61] "kindnet-5d6kd" [be85ead0-4248-490e-a8fc-2a92f78801f3] Running
	I1107 09:12:34.034806    9678 system_pods.go:61] "kindnet-mgtrp" [e8094b6c-54ad-4f87-aaf3-88dc5155b128] Running
	I1107 09:12:34.034812    9678 system_pods.go:61] "kindnet-nx5lb" [3021a22e-37f1-40d1-9205-1abfb03e58a9] Running
	I1107 09:12:34.034817    9678 system_pods.go:61] "kube-apiserver-multinode-090641" [3ae5af06-6458-4954-a296-a43002732bf4] Running
	I1107 09:12:34.034821    9678 system_pods.go:61] "kube-controller-manager-multinode-090641" [1c2584e6-6b2e-4c67-aea4-7c5568355345] Running
	I1107 09:12:34.034824    9678 system_pods.go:61] "kube-proxy-hxglr" [64e6c03e-e0da-4b75-a1eb-ff55dd0c84ff] Running
	I1107 09:12:34.034828    9678 system_pods.go:61] "kube-proxy-nwck5" [017b9de2-3593-4e50-9493-7d14c0b994ce] Running
	I1107 09:12:34.034832    9678 system_pods.go:61] "kube-proxy-rqnqb" [f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d] Running
	I1107 09:12:34.034835    9678 system_pods.go:61] "kube-scheduler-multinode-090641" [76a48883-135f-49f5-831d-d0182408b2ca] Running
	I1107 09:12:34.034839    9678 system_pods.go:61] "storage-provisioner" [29595449-7701-47e2-af62-0638177bb673] Running
	I1107 09:12:34.034843    9678 system_pods.go:74] duration metric: took 187.779435ms to wait for pod list to return data ...
	I1107 09:12:34.034848    9678 default_sa.go:34] waiting for default service account to be created ...
	I1107 09:12:34.225990    9678 request.go:614] Waited for 191.082557ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/default/serviceaccounts
	I1107 09:12:34.226189    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/default/serviceaccounts
	I1107 09:12:34.226202    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:34.226214    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:34.226224    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:34.230034    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:34.230047    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:34.230055    9678 round_trippers.go:580]     Content-Length: 262
	I1107 09:12:34.230061    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:34 GMT
	I1107 09:12:34.230069    9678 round_trippers.go:580]     Audit-Id: 265e2aee-7e0b-445d-852a-fed21109d4b7
	I1107 09:12:34.230075    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:34.230082    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:34.230094    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:34.230101    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:34.230115    9678 request.go:1154] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1082"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f1ae7bed-fcb2-4b82-ac97-3026f7395742","resourceVersion":"296","creationTimestamp":"2022-11-07T17:07:19Z"}}]}
	I1107 09:12:34.230276    9678 default_sa.go:45] found service account: "default"
	I1107 09:12:34.230291    9678 default_sa.go:55] duration metric: took 195.428822ms for default service account to be created ...
	I1107 09:12:34.230298    9678 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 09:12:34.427445    9678 request.go:614] Waited for 197.078914ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods
	I1107 09:12:34.427588    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/namespaces/kube-system/pods
	I1107 09:12:34.427601    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:34.427615    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:34.427639    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:34.432820    9678 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1107 09:12:34.432848    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:34.432859    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:34.432866    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:34.432872    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:34.432878    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:34 GMT
	I1107 09:12:34.432886    9678 round_trippers.go:580]     Audit-Id: ac13ca40-7114-4a24-920b-13a4b364bfec
	I1107 09:12:34.432893    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:34.434610    9678 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1082"},"items":[{"metadata":{"name":"coredns-565d847f94-54csh","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"6e280b18-683c-4888-93db-3756e665d1f6","resourceVersion":"1044","creationTimestamp":"2022-11-07T17:07:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"242d6ac3-b661-496c-88f4-ff8c77c9ad21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-07T17:07:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"242d6ac3-b661-496c-88f4-ff8c77c9ad21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84957 chars]
	I1107 09:12:34.437114    9678 system_pods.go:86] 12 kube-system pods found
	I1107 09:12:34.437139    9678 system_pods.go:89] "coredns-565d847f94-54csh" [6e280b18-683c-4888-93db-3756e665d1f6] Running
	I1107 09:12:34.437144    9678 system_pods.go:89] "etcd-multinode-090641" [b5cec8d5-21cb-4a1e-a05a-92b541499e1c] Running
	I1107 09:12:34.437148    9678 system_pods.go:89] "kindnet-5d6kd" [be85ead0-4248-490e-a8fc-2a92f78801f3] Running
	I1107 09:12:34.437152    9678 system_pods.go:89] "kindnet-mgtrp" [e8094b6c-54ad-4f87-aaf3-88dc5155b128] Running
	I1107 09:12:34.437156    9678 system_pods.go:89] "kindnet-nx5lb" [3021a22e-37f1-40d1-9205-1abfb03e58a9] Running
	I1107 09:12:34.437159    9678 system_pods.go:89] "kube-apiserver-multinode-090641" [3ae5af06-6458-4954-a296-a43002732bf4] Running
	I1107 09:12:34.437167    9678 system_pods.go:89] "kube-controller-manager-multinode-090641" [1c2584e6-6b2e-4c67-aea4-7c5568355345] Running
	I1107 09:12:34.437171    9678 system_pods.go:89] "kube-proxy-hxglr" [64e6c03e-e0da-4b75-a1eb-ff55dd0c84ff] Running
	I1107 09:12:34.437175    9678 system_pods.go:89] "kube-proxy-nwck5" [017b9de2-3593-4e50-9493-7d14c0b994ce] Running
	I1107 09:12:34.437178    9678 system_pods.go:89] "kube-proxy-rqnqb" [f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d] Running
	I1107 09:12:34.437183    9678 system_pods.go:89] "kube-scheduler-multinode-090641" [76a48883-135f-49f5-831d-d0182408b2ca] Running
	I1107 09:12:34.437187    9678 system_pods.go:89] "storage-provisioner" [29595449-7701-47e2-af62-0638177bb673] Running
	I1107 09:12:34.437192    9678 system_pods.go:126] duration metric: took 206.884328ms to wait for k8s-apps to be running ...
	I1107 09:12:34.437196    9678 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 09:12:34.437261    9678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 09:12:34.446899    9678 system_svc.go:56] duration metric: took 9.697955ms WaitForService to wait for kubelet.
	I1107 09:12:34.446911    9678 kubeadm.go:573] duration metric: took 4.205149164s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 09:12:34.446926    9678 node_conditions.go:102] verifying NodePressure condition ...
	I1107 09:12:34.627437    9678 request.go:614] Waited for 180.434189ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51429/api/v1/nodes
	I1107 09:12:34.627554    9678 round_trippers.go:463] GET https://127.0.0.1:51429/api/v1/nodes
	I1107 09:12:34.627565    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:34.627576    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:34.627587    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:34.632102    9678 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 09:12:34.632114    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:34.632119    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:34.632124    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:34.632128    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:34 GMT
	I1107 09:12:34.632133    9678 round_trippers.go:580]     Audit-Id: 304c67c3-00ed-47aa-892c-5149b3b193f2
	I1107 09:12:34.632138    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:34.632143    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:34.632225    9678 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1082"},"items":[{"metadata":{"name":"multinode-090641","uid":"617f99e2-f422-49d0-a319-73177eda1a24","resourceVersion":"980","creationTimestamp":"2022-11-07T17:07:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-090641","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a8d0d2851e022d93d0c1376f6d2f8095068de262","minikube.k8s.io/name":"multinode-090641","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_07T09_07_07_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFie
lds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 10903 chars]
	I1107 09:12:34.632544    9678 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1107 09:12:34.632554    9678 node_conditions.go:123] node cpu capacity is 6
	I1107 09:12:34.632561    9678 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1107 09:12:34.632564    9678 node_conditions.go:123] node cpu capacity is 6
	I1107 09:12:34.632568    9678 node_conditions.go:105] duration metric: took 185.633793ms to run NodePressure ...
	I1107 09:12:34.632575    9678 start.go:217] waiting for startup goroutines ...
	I1107 09:12:34.633275    9678 config.go:180] Loaded profile config "multinode-090641": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:12:34.633342    9678 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/config.json ...
	I1107 09:12:34.655687    9678 out.go:177] * Starting worker node multinode-090641-m02 in cluster multinode-090641
	I1107 09:12:34.677043    9678 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 09:12:34.698340    9678 out.go:177] * Pulling base image ...
	I1107 09:12:34.720473    9678 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 09:12:34.720483    9678 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 09:12:34.720514    9678 cache.go:57] Caching tarball of preloaded images
	I1107 09:12:34.720712    9678 preload.go:174] Found /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 09:12:34.720734    9678 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 09:12:34.721552    9678 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/config.json ...
	I1107 09:12:34.777853    9678 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 09:12:34.777893    9678 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 09:12:34.777906    9678 cache.go:208] Successfully downloaded all kic artifacts
	I1107 09:12:34.777943    9678 start.go:364] acquiring machines lock for multinode-090641-m02: {Name:mk293de5de179041e4a4997c06a64a8e82b6c39e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 09:12:34.778031    9678 start.go:368] acquired machines lock for "multinode-090641-m02" in 75.987µs
	I1107 09:12:34.778057    9678 start.go:96] Skipping create...Using existing machine configuration
	I1107 09:12:34.778062    9678 fix.go:55] fixHost starting: m02
	I1107 09:12:34.778350    9678 cli_runner.go:164] Run: docker container inspect multinode-090641-m02 --format={{.State.Status}}
	I1107 09:12:34.834939    9678 fix.go:103] recreateIfNeeded on multinode-090641-m02: state=Stopped err=<nil>
	W1107 09:12:34.834961    9678 fix.go:129] unexpected machine state, will restart: <nil>
	I1107 09:12:34.856856    9678 out.go:177] * Restarting existing docker container for "multinode-090641-m02" ...
	I1107 09:12:34.899732    9678 cli_runner.go:164] Run: docker start multinode-090641-m02
	I1107 09:12:35.240134    9678 cli_runner.go:164] Run: docker container inspect multinode-090641-m02 --format={{.State.Status}}
	I1107 09:12:35.300533    9678 kic.go:415] container "multinode-090641-m02" state is running.
	I1107 09:12:35.301102    9678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-090641-m02
	I1107 09:12:35.363745    9678 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/config.json ...
	I1107 09:12:35.364248    9678 machine.go:88] provisioning docker machine ...
	I1107 09:12:35.364267    9678 ubuntu.go:169] provisioning hostname "multinode-090641-m02"
	I1107 09:12:35.364375    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:35.434458    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:35.434629    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51457 <nil> <nil>}
	I1107 09:12:35.434639    9678 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-090641-m02 && echo "multinode-090641-m02" | sudo tee /etc/hostname
	I1107 09:12:35.588762    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-090641-m02
	
	I1107 09:12:35.588857    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:35.652316    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:35.652472    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51457 <nil> <nil>}
	I1107 09:12:35.652485    9678 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-090641-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-090641-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-090641-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 09:12:35.773766    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 09:12:35.773786    9678 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15310-2115/.minikube CaCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15310-2115/.minikube}
	I1107 09:12:35.773814    9678 ubuntu.go:177] setting up certificates
	I1107 09:12:35.773823    9678 provision.go:83] configureAuth start
	I1107 09:12:35.773922    9678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-090641-m02
	I1107 09:12:35.833228    9678 provision.go:138] copyHostCerts
	I1107 09:12:35.833279    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 09:12:35.833338    9678 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem, removing ...
	I1107 09:12:35.833344    9678 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 09:12:35.833450    9678 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem (1082 bytes)
	I1107 09:12:35.833623    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 09:12:35.833663    9678 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem, removing ...
	I1107 09:12:35.833667    9678 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 09:12:35.833792    9678 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem (1123 bytes)
	I1107 09:12:35.833933    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 09:12:35.833969    9678 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem, removing ...
	I1107 09:12:35.833974    9678 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 09:12:35.834045    9678 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem (1679 bytes)
	I1107 09:12:35.834172    9678 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem org=jenkins.multinode-090641-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-090641-m02]
	I1107 09:12:36.017539    9678 provision.go:172] copyRemoteCerts
	I1107 09:12:36.017604    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 09:12:36.017669    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:36.079063    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641-m02/id_rsa Username:docker}
	I1107 09:12:36.164012    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 09:12:36.164094    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 09:12:36.181786    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 09:12:36.181882    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1107 09:12:36.198627    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 09:12:36.198718    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 09:12:36.215475    9678 provision.go:86] duration metric: configureAuth took 441.630066ms
	I1107 09:12:36.215505    9678 ubuntu.go:193] setting minikube options for container-runtime
	I1107 09:12:36.215689    9678 config.go:180] Loaded profile config "multinode-090641": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:12:36.215776    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:36.273592    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:36.273767    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51457 <nil> <nil>}
	I1107 09:12:36.273778    9678 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 09:12:36.389713    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 09:12:36.389731    9678 ubuntu.go:71] root file system type: overlay
	I1107 09:12:36.389892    9678 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 09:12:36.389978    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:36.448754    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:36.448913    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51457 <nil> <nil>}
	I1107 09:12:36.448961    9678 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 09:12:36.574590    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 09:12:36.574715    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:36.632733    9678 main.go:134] libmachine: Using SSH client type: native
	I1107 09:12:36.632880    9678 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 51457 <nil> <nil>}
	I1107 09:12:36.632895    9678 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 09:12:36.754910    9678 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 09:12:36.754927    9678 machine.go:91] provisioned docker machine in 1.390634499s
	I1107 09:12:36.754934    9678 start.go:300] post-start starting for "multinode-090641-m02" (driver="docker")
	I1107 09:12:36.754940    9678 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 09:12:36.755014    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 09:12:36.755077    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:36.812191    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641-m02/id_rsa Username:docker}
	I1107 09:12:36.898167    9678 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 09:12:36.901449    9678 command_runner.go:130] > NAME="Ubuntu"
	I1107 09:12:36.901460    9678 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I1107 09:12:36.901464    9678 command_runner.go:130] > ID=ubuntu
	I1107 09:12:36.901470    9678 command_runner.go:130] > ID_LIKE=debian
	I1107 09:12:36.901477    9678 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I1107 09:12:36.901482    9678 command_runner.go:130] > VERSION_ID="20.04"
	I1107 09:12:36.901489    9678 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1107 09:12:36.901497    9678 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1107 09:12:36.901502    9678 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1107 09:12:36.901518    9678 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1107 09:12:36.901523    9678 command_runner.go:130] > VERSION_CODENAME=focal
	I1107 09:12:36.901529    9678 command_runner.go:130] > UBUNTU_CODENAME=focal
	I1107 09:12:36.901579    9678 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 09:12:36.901589    9678 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 09:12:36.901599    9678 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 09:12:36.901604    9678 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 09:12:36.901610    9678 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/addons for local assets ...
	I1107 09:12:36.901698    9678 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/files for local assets ...
	I1107 09:12:36.901860    9678 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> 32672.pem in /etc/ssl/certs
	I1107 09:12:36.901868    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> /etc/ssl/certs/32672.pem
	I1107 09:12:36.902053    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 09:12:36.910520    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:12:36.928012    9678 start.go:303] post-start completed in 173.063441ms
	I1107 09:12:36.928092    9678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 09:12:36.928153    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:36.986235    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641-m02/id_rsa Username:docker}
	I1107 09:12:37.069251    9678 command_runner.go:130] > 6%!
	(MISSING)I1107 09:12:37.069341    9678 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 09:12:37.073871    9678 command_runner.go:130] > 92G
	I1107 09:12:37.094141    9678 fix.go:57] fixHost completed within 2.316013742s
	I1107 09:12:37.094163    9678 start.go:83] releasing machines lock for "multinode-090641-m02", held for 2.316061544s
	I1107 09:12:37.094352    9678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-090641-m02
	I1107 09:12:37.174178    9678 out.go:177] * Found network options:
	I1107 09:12:37.195455    9678 out.go:177]   - NO_PROXY=192.168.58.2
	W1107 09:12:37.217090    9678 proxy.go:119] fail to check proxy env: Error ip not in block
	W1107 09:12:37.217155    9678 proxy.go:119] fail to check proxy env: Error ip not in block
	I1107 09:12:37.217384    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1107 09:12:37.217393    9678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 09:12:37.217520    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:37.217541    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:12:37.279387    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641-m02/id_rsa Username:docker}
	I1107 09:12:37.279533    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51457 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641-m02/id_rsa Username:docker}
	I1107 09:12:37.418029    9678 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1107 09:12:37.419951    9678 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I1107 09:12:37.432516    9678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:12:37.506573    9678 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1107 09:12:37.605540    9678 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 09:12:37.616361    9678 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1107 09:12:37.616393    9678 command_runner.go:130] > [Unit]
	I1107 09:12:37.616403    9678 command_runner.go:130] > Description=Docker Application Container Engine
	I1107 09:12:37.616409    9678 command_runner.go:130] > Documentation=https://docs.docker.com
	I1107 09:12:37.616413    9678 command_runner.go:130] > BindsTo=containerd.service
	I1107 09:12:37.616418    9678 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1107 09:12:37.616423    9678 command_runner.go:130] > Wants=network-online.target
	I1107 09:12:37.616427    9678 command_runner.go:130] > Requires=docker.socket
	I1107 09:12:37.616432    9678 command_runner.go:130] > StartLimitBurst=3
	I1107 09:12:37.616436    9678 command_runner.go:130] > StartLimitIntervalSec=60
	I1107 09:12:37.616440    9678 command_runner.go:130] > [Service]
	I1107 09:12:37.616443    9678 command_runner.go:130] > Type=notify
	I1107 09:12:37.616447    9678 command_runner.go:130] > Restart=on-failure
	I1107 09:12:37.616451    9678 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I1107 09:12:37.616456    9678 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1107 09:12:37.616463    9678 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1107 09:12:37.616468    9678 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1107 09:12:37.616474    9678 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1107 09:12:37.616481    9678 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1107 09:12:37.616486    9678 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1107 09:12:37.616496    9678 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1107 09:12:37.616506    9678 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1107 09:12:37.616515    9678 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1107 09:12:37.616521    9678 command_runner.go:130] > ExecStart=
	I1107 09:12:37.616532    9678 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1107 09:12:37.616536    9678 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1107 09:12:37.616544    9678 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1107 09:12:37.616549    9678 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1107 09:12:37.616553    9678 command_runner.go:130] > LimitNOFILE=infinity
	I1107 09:12:37.616556    9678 command_runner.go:130] > LimitNPROC=infinity
	I1107 09:12:37.616560    9678 command_runner.go:130] > LimitCORE=infinity
	I1107 09:12:37.616565    9678 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1107 09:12:37.616569    9678 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1107 09:12:37.616576    9678 command_runner.go:130] > TasksMax=infinity
	I1107 09:12:37.616579    9678 command_runner.go:130] > TimeoutStartSec=0
	I1107 09:12:37.616584    9678 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1107 09:12:37.616588    9678 command_runner.go:130] > Delegate=yes
	I1107 09:12:37.616598    9678 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1107 09:12:37.616602    9678 command_runner.go:130] > KillMode=process
	I1107 09:12:37.616605    9678 command_runner.go:130] > [Install]
	I1107 09:12:37.616609    9678 command_runner.go:130] > WantedBy=multi-user.target
	I1107 09:12:37.617203    9678 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 09:12:37.617270    9678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 09:12:37.626466    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 09:12:37.638117    9678 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1107 09:12:37.638129    9678 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I1107 09:12:37.639036    9678 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 09:12:37.712086    9678 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 09:12:37.782059    9678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:12:37.850049    9678 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 09:12:38.082578    9678 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 09:12:38.147577    9678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:12:38.216906    9678 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1107 09:12:38.226468    9678 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 09:12:38.226555    9678 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 09:12:38.230230    9678 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1107 09:12:38.230240    9678 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1107 09:12:38.230249    9678 command_runner.go:130] > Device: 100036h/1048630d	Inode: 130         Links: 1
	I1107 09:12:38.230255    9678 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1107 09:12:38.230262    9678 command_runner.go:130] > Access: 2022-11-07 17:12:38.061131969 +0000
	I1107 09:12:38.230266    9678 command_runner.go:130] > Modify: 2022-11-07 17:12:37.527132006 +0000
	I1107 09:12:38.230271    9678 command_runner.go:130] > Change: 2022-11-07 17:12:37.542132005 +0000
	I1107 09:12:38.230275    9678 command_runner.go:130] >  Birth: -
	I1107 09:12:38.230286    9678 start.go:472] Will wait 60s for crictl version
	I1107 09:12:38.230335    9678 ssh_runner.go:195] Run: sudo crictl version
	I1107 09:12:38.256676    9678 command_runner.go:130] > Version:  0.1.0
	I1107 09:12:38.256687    9678 command_runner.go:130] > RuntimeName:  docker
	I1107 09:12:38.256697    9678 command_runner.go:130] > RuntimeVersion:  20.10.20
	I1107 09:12:38.256710    9678 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I1107 09:12:38.258513    9678 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1107 09:12:38.258605    9678 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:12:38.284676    9678 command_runner.go:130] > 20.10.20
	I1107 09:12:38.286659    9678 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:12:38.312548    9678 command_runner.go:130] > 20.10.20
	I1107 09:12:38.359207    9678 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1107 09:12:38.381005    9678 out.go:177]   - env NO_PROXY=192.168.58.2
	I1107 09:12:38.402202    9678 cli_runner.go:164] Run: docker exec -t multinode-090641-m02 dig +short host.docker.internal
	I1107 09:12:38.521784    9678 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1107 09:12:38.521888    9678 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1107 09:12:38.526217    9678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
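	The /etc/hosts rewrite above is a replace-then-copy pattern that keeps the host.minikube.internal entry unique across restarts; spelled out step by step it is the same work the log shows on one line (illustrative breakdown only):

	grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$    # drop any stale entry
	echo "192.168.65.2	host.minikube.internal" >> /tmp/h.$$    # append the current mapping
	sudo cp /tmp/h.$$ /etc/hosts                                   # install the rewritten file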
	I1107 09:12:38.535736    9678 certs.go:54] Setting up /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641 for IP: 192.168.58.3
	I1107 09:12:38.535853    9678 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key
	I1107 09:12:38.535907    9678 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key
	I1107 09:12:38.535915    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 09:12:38.535948    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 09:12:38.535973    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 09:12:38.535998    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 09:12:38.536086    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem (1338 bytes)
	W1107 09:12:38.536138    9678 certs.go:384] ignoring /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267_empty.pem, impossibly tiny 0 bytes
	I1107 09:12:38.536153    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 09:12:38.536193    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem (1082 bytes)
	I1107 09:12:38.536232    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem (1123 bytes)
	I1107 09:12:38.536264    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem (1679 bytes)
	I1107 09:12:38.536342    9678 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:12:38.536383    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> /usr/share/ca-certificates/32672.pem
	I1107 09:12:38.536413    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:38.536434    9678 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem -> /usr/share/ca-certificates/3267.pem
	I1107 09:12:38.536767    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 09:12:38.554401    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 09:12:38.571586    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 09:12:38.589897    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 09:12:38.607380    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /usr/share/ca-certificates/32672.pem (1708 bytes)
	I1107 09:12:38.623922    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 09:12:38.641539    9678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem --> /usr/share/ca-certificates/3267.pem (1338 bytes)
	I1107 09:12:38.659050    9678 ssh_runner.go:195] Run: openssl version
	I1107 09:12:38.664357    9678 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I1107 09:12:38.664777    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 09:12:38.672962    9678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:38.676851    9678 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:38.676940    9678 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:38.677002    9678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:12:38.682120    9678 command_runner.go:130] > b5213941
	I1107 09:12:38.682452    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 09:12:38.690013    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3267.pem && ln -fs /usr/share/ca-certificates/3267.pem /etc/ssl/certs/3267.pem"
	I1107 09:12:38.698443    9678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3267.pem
	I1107 09:12:38.702204    9678 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/3267.pem
	I1107 09:12:38.702334    9678 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/3267.pem
	I1107 09:12:38.702384    9678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3267.pem
	I1107 09:12:38.707394    9678 command_runner.go:130] > 51391683
	I1107 09:12:38.707749    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3267.pem /etc/ssl/certs/51391683.0"
	I1107 09:12:38.715087    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32672.pem && ln -fs /usr/share/ca-certificates/32672.pem /etc/ssl/certs/32672.pem"
	I1107 09:12:38.722738    9678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32672.pem
	I1107 09:12:38.726303    9678 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/32672.pem
	I1107 09:12:38.726395    9678 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/32672.pem
	I1107 09:12:38.726443    9678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32672.pem
	I1107 09:12:38.731207    9678 command_runner.go:130] > 3ec20f2e
	I1107 09:12:38.731456    9678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32672.pem /etc/ssl/certs/3ec20f2e.0"
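	The three openssl -hash / ln -fs rounds above follow the OpenSSL subject-hash convention: a file named /etc/ssl/certs/<hash>.0 whose hash matches a certificate's subject is picked up automatically by TLS clients on the node. Condensed for the minikube CA alone (same files and hash as in the log, not an additional action taken by the test):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"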
	I1107 09:12:38.738803    9678 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 09:12:38.805762    9678 command_runner.go:130] > systemd
	I1107 09:12:38.808063    9678 cni.go:95] Creating CNI manager for ""
	I1107 09:12:38.808074    9678 cni.go:156] 2 nodes found, recommending kindnet
	I1107 09:12:38.808085    9678 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 09:12:38.808105    9678 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-090641 NodeName:multinode-090641-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 09:12:38.808187    9678 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-090641-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 09:12:38.808258    9678 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-090641-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-090641 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 09:12:38.808332    9678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1107 09:12:38.815160    9678 command_runner.go:130] > kubeadm
	I1107 09:12:38.815168    9678 command_runner.go:130] > kubectl
	I1107 09:12:38.815171    9678 command_runner.go:130] > kubelet
	I1107 09:12:38.815923    9678 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 09:12:38.815985    9678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1107 09:12:38.822735    9678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (482 bytes)
	I1107 09:12:38.835647    9678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 09:12:38.848281    9678 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1107 09:12:38.851948    9678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 09:12:38.861329    9678 host.go:66] Checking if "multinode-090641" exists ...
	I1107 09:12:38.861511    9678 config.go:180] Loaded profile config "multinode-090641": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:12:38.861518    9678 start.go:286] JoinCluster: &{Name:multinode-090641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-090641 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portain
er:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:12:38.861608    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1107 09:12:38.861673    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:38.919647    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:39.053317    9678 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f 
	I1107 09:12:39.053349    9678 start.go:299] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1107 09:12:39.053369    9678 host.go:66] Checking if "multinode-090641" exists ...
	I1107 09:12:39.053609    9678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-090641-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1107 09:12:39.053674    9678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:12:39.111894    9678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51425 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:12:39.237803    9678 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1107 09:12:39.268588    9678 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-5d6kd, kube-system/kube-proxy-hxglr
	I1107 09:12:42.278699    9678 command_runner.go:130] > node/multinode-090641-m02 cordoned
	I1107 09:12:42.278716    9678 command_runner.go:130] > pod "busybox-65db55d5d6-gvc9j" has DeletionTimestamp older than 1 seconds, skipping
	I1107 09:12:42.278721    9678 command_runner.go:130] > node/multinode-090641-m02 drained
	I1107 09:12:42.278740    9678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-090641-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.225031924s)
	I1107 09:12:42.278750    9678 node.go:109] successfully drained node "m02"
	I1107 09:12:42.279107    9678 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:12:42.279321    9678 kapi.go:59] client config for multinode-090641: &rest.Config{Host:"https://127.0.0.1:51429", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/multinode-090641/client.key", CAFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2345ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 09:12:42.279569    9678 request.go:1154] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1107 09:12:42.279600    9678 round_trippers.go:463] DELETE https://127.0.0.1:51429/api/v1/nodes/multinode-090641-m02
	I1107 09:12:42.279605    9678 round_trippers.go:469] Request Headers:
	I1107 09:12:42.279611    9678 round_trippers.go:473]     Accept: application/json, */*
	I1107 09:12:42.279617    9678 round_trippers.go:473]     Content-Type: application/json
	I1107 09:12:42.279622    9678 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1107 09:12:42.283069    9678 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 09:12:42.283080    9678 round_trippers.go:577] Response Headers:
	I1107 09:12:42.283086    9678 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5bc67d1-8e5a-49a2-a237-45122347a3ca
	I1107 09:12:42.283091    9678 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e20b5b9a-3aac-4f88-b458-fda3096dde4d
	I1107 09:12:42.283099    9678 round_trippers.go:580]     Content-Length: 171
	I1107 09:12:42.283104    9678 round_trippers.go:580]     Date: Mon, 07 Nov 2022 17:12:42 GMT
	I1107 09:12:42.283112    9678 round_trippers.go:580]     Audit-Id: 4b919fdd-8370-45bb-8e46-98d354ba8e74
	I1107 09:12:42.283117    9678 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 09:12:42.283122    9678 round_trippers.go:580]     Content-Type: application/json
	I1107 09:12:42.283136    9678 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-090641-m02","kind":"nodes","uid":"1d15109e-f0b8-4c7f-a0d6-c4e58cde1a91"}}
	I1107 09:12:42.283163    9678 node.go:125] successfully deleted node "m02"
	I1107 09:12:42.283170    9678 start.go:303] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
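	For comparison, the drain-and-remove step just completed (kubectl drain run over SSH, followed by a raw DELETE against /api/v1/nodes) corresponds roughly to these two commands run with the cluster's kubeconfig; the node name and drain flags are taken from the log, with the deprecated --delete-local-data flag left out (illustrative only):

	kubectl drain multinode-090641-m02 --force --grace-period=1 --ignore-daemonsets --delete-emptydir-data --disable-eviction
	kubectl delete node multinode-090641-m02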
	I1107 09:12:42.283182    9678 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1107 09:12:42.283192    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02"
	I1107 09:12:42.321330    9678 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 09:12:42.432078    9678 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 09:12:42.432093    9678 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1107 09:12:42.452544    9678 command_runner.go:130] ! W1107 17:12:42.332044    1095 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:12:42.452557    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1107 09:12:42.452570    9678 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1107 09:12:42.452578    9678 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1107 09:12:42.452583    9678 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1107 09:12:42.452590    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1107 09:12:42.452605    9678 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1107 09:12:42.452611    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1107 09:12:42.452650    9678 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:12:42.332044    1095 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:12:42.452663    9678 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1107 09:12:42.452671    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1107 09:12:42.489820    9678 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1107 09:12:42.489840    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1107 09:12:42.489860    9678 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:12:42.489880    9678 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:12:42.332044    1095 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
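	From here the same cycle repeats until the test gives up: the just-deleted Node object reappears almost immediately, which is consistent with the kubelet on m02 still running (note the Port 10250 and kubelet.conf warnings above), and the fallback kubeadm reset aborts because both containerd.sock and cri-dockerd.sock exist on the host while no socket is named. A manual recovery on the worker would plausibly stop the kubelet first and name the CRI socket explicitly; this is a sketch, not something the run attempts, and <token>/<hash> stand in for the values printed earlier in the log:

	sudo systemctl stop kubelet
	sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock
	sudo kubeadm join control-plane.minikube.internal:8443 \
	  --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	  --cri-socket unix:///var/run/cri-dockerd.sock --node-name multinode-090641-m02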
	I1107 09:12:53.537148    9678 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1107 09:12:53.537312    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02"
	I1107 09:12:53.576193    9678 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 09:12:53.675405    9678 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 09:12:53.675423    9678 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1107 09:12:53.693584    9678 command_runner.go:130] ! W1107 17:12:53.575913    1771 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:12:53.693605    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1107 09:12:53.693614    9678 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1107 09:12:53.693620    9678 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1107 09:12:53.693627    9678 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1107 09:12:53.693633    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1107 09:12:53.693642    9678 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1107 09:12:53.693647    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1107 09:12:53.693676    9678 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:12:53.575913    1771 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:12:53.693685    9678 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1107 09:12:53.693692    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1107 09:12:53.731248    9678 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1107 09:12:53.731269    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1107 09:12:53.731291    9678 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:12:53.731302    9678 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:12:53.575913    1771 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:15.341284    9678 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1107 09:13:15.341342    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02"
	I1107 09:13:15.378198    9678 command_runner.go:130] ! W1107 17:13:15.393134    2010 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:13:15.378214    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1107 09:13:15.400742    9678 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1107 09:13:15.405187    9678 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1107 09:13:15.465066    9678 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1107 09:13:15.465080    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1107 09:13:15.491421    9678 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1107 09:13:15.491444    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:15.494621    9678 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 09:13:15.494636    9678 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 09:13:15.494643    9678 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E1107 09:13:15.494673    9678 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:13:15.393134    2010 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:15.494682    9678 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1107 09:13:15.494691    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1107 09:13:15.530806    9678 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1107 09:13:15.530821    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:15.530835    9678 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:15.530846    9678 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:13:15.393134    2010 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:41.734157    9678 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1107 09:13:41.734233    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02"
	I1107 09:13:41.767927    9678 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 09:13:41.869550    9678 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 09:13:41.869564    9678 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1107 09:13:41.888428    9678 command_runner.go:130] ! W1107 17:13:41.780315    2274 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:13:41.888441    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1107 09:13:41.888452    9678 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1107 09:13:41.888457    9678 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1107 09:13:41.888462    9678 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1107 09:13:41.888469    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1107 09:13:41.888478    9678 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1107 09:13:41.888484    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1107 09:13:41.888510    9678 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:13:41.780315    2274 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:41.888518    9678 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1107 09:13:41.888526    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1107 09:13:41.923298    9678 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1107 09:13:41.923314    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:41.926330    9678 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:13:41.926346    9678 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:13:41.780315    2274 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:14:13.577189    9678 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1107 09:14:13.577283    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02"
	I1107 09:14:13.614416    9678 command_runner.go:130] ! W1107 17:14:13.628078    2604 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:14:13.614527    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1107 09:14:13.637583    9678 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1107 09:14:13.642787    9678 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1107 09:14:13.706851    9678 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1107 09:14:13.706864    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1107 09:14:13.732790    9678 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1107 09:14:13.732803    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1107 09:14:13.735842    9678 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 09:14:13.735853    9678 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 09:14:13.735860    9678 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E1107 09:14:13.735888    9678 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:14:13.628078    2604 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:14:13.735906    9678 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1107 09:14:13.735922    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1107 09:14:13.770217    9678 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1107 09:14:13.770230    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1107 09:14:13.772704    9678 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
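	(Note: the reset fails for a different reason: when no socket is specified, kubeadm auto-detects CRI endpoints, and this host exposes both the containerd and cri-dockerd sockets, so it refuses to guess. The reset command in the log passes no --cri-socket flag, unlike the join command. A minimal sketch of a manual reset that follows the error message's own suggestion, using the CLI equivalent of the criSocket field it mentions; hypothetical, not what minikube executes here:
	    # hypothetical manual reset, pinning the same CRI socket the join command uses
	    sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock)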
	I1107 09:14:13.772723    9678 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:14:13.628078    2604 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:15:00.585276    9678 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1107 09:15:00.585339    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02"
	I1107 09:15:00.624218    9678 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 09:15:00.727643    9678 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 09:15:00.727680    9678 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1107 09:15:00.744682    9678 command_runner.go:130] ! W1107 17:15:00.628744    3040 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 09:15:00.744697    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1107 09:15:00.744708    9678 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1107 09:15:00.744713    9678 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1107 09:15:00.744718    9678 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1107 09:15:00.744737    9678 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1107 09:15:00.744750    9678 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1107 09:15:00.744756    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1107 09:15:00.744793    9678 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:15:00.628744    3040 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:15:00.744801    9678 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1107 09:15:00.744810    9678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1107 09:15:00.786274    9678 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1107 09:15:00.786289    9678 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1107 09:15:00.786308    9678 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1107 09:15:00.786324    9678 start.go:288] JoinCluster complete in 2m21.921226011s
	I1107 09:15:00.808159    9678 out.go:177] 
	W1107 09:15:00.829126    9678 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dl06rz.3hhhxqmrjz2rm1zy --discovery-token-ca-cert-hash sha256:3ff1a4dd3f3b08e47b3495b9a646d345f6ef38dbabac0540012121a1bc6bd33f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-090641-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1107 17:15:00.628744    3040 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-090641-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 09:15:00.829157    9678 out.go:239] * 
	W1107 09:15:00.829806    9678 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 09:15:00.892201    9678 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-11-07 17:12:03 UTC, end at Mon 2022-11-07 17:15:02 UTC. --
	Nov 07 17:12:05 multinode-090641 dockerd[132]: time="2022-11-07T17:12:05.866791546Z" level=info msg="Daemon shutdown complete"
	Nov 07 17:12:05 multinode-090641 dockerd[132]: time="2022-11-07T17:12:05.866833045Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Nov 07 17:12:05 multinode-090641 systemd[1]: docker.service: Succeeded.
	Nov 07 17:12:05 multinode-090641 systemd[1]: Stopped Docker Application Container Engine.
	Nov 07 17:12:05 multinode-090641 systemd[1]: docker.service: Consumed 1.151s CPU time.
	Nov 07 17:12:05 multinode-090641 systemd[1]: Starting Docker Application Container Engine...
	Nov 07 17:12:05 multinode-090641 dockerd[638]: time="2022-11-07T17:12:05.910773196Z" level=info msg="Starting up"
	Nov 07 17:12:05 multinode-090641 dockerd[638]: time="2022-11-07T17:12:05.912538990Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 07 17:12:05 multinode-090641 dockerd[638]: time="2022-11-07T17:12:05.912573040Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 07 17:12:05 multinode-090641 dockerd[638]: time="2022-11-07T17:12:05.912589166Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 07 17:12:05 multinode-090641 dockerd[638]: time="2022-11-07T17:12:05.912596815Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 07 17:12:05 multinode-090641 dockerd[638]: time="2022-11-07T17:12:05.913770543Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 07 17:12:05 multinode-090641 dockerd[638]: time="2022-11-07T17:12:05.913907574Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 07 17:12:05 multinode-090641 dockerd[638]: time="2022-11-07T17:12:05.913962645Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 07 17:12:05 multinode-090641 dockerd[638]: time="2022-11-07T17:12:05.914001602Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 07 17:12:05 multinode-090641 dockerd[638]: time="2022-11-07T17:12:05.917130605Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Nov 07 17:12:05 multinode-090641 dockerd[638]: time="2022-11-07T17:12:05.922475538Z" level=info msg="Loading containers: start."
	Nov 07 17:12:06 multinode-090641 dockerd[638]: time="2022-11-07T17:12:06.047467802Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 07 17:12:06 multinode-090641 dockerd[638]: time="2022-11-07T17:12:06.088416029Z" level=info msg="Loading containers: done."
	Nov 07 17:12:06 multinode-090641 dockerd[638]: time="2022-11-07T17:12:06.099229231Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 07 17:12:06 multinode-090641 dockerd[638]: time="2022-11-07T17:12:06.099301399Z" level=info msg="Daemon has completed initialization"
	Nov 07 17:12:06 multinode-090641 systemd[1]: Started Docker Application Container Engine.
	Nov 07 17:12:06 multinode-090641 dockerd[638]: time="2022-11-07T17:12:06.124315782Z" level=info msg="API listen on [::]:2376"
	Nov 07 17:12:06 multinode-090641 dockerd[638]: time="2022-11-07T17:12:06.126658365Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 07 17:12:48 multinode-090641 dockerd[638]: time="2022-11-07T17:12:48.135407569Z" level=info msg="ignoring event" container=05760ea85057cc3db9f498e8b175a63457660be7c052477341edba54c7f2c887 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	a6073532f4cf5       6e38f40d628db       2 minutes ago       Running             storage-provisioner       3                   4d74ff4efd4cf
	cef86b3cefbba       5185b96f0becf       2 minutes ago       Running             coredns                   2                   65214accbb730
	b883a1c86645a       8c811b4aec35f       2 minutes ago       Running             busybox                   2                   8827a35b20e4a
	de5efc17615d2       d6e3e26021b60       2 minutes ago       Running             kindnet-cni               2                   5d560974a12bf
	c04be752c4eb3       beaaf00edd38a       2 minutes ago       Running             kube-proxy                2                   875adb35fac68
	05760ea85057c       6e38f40d628db       2 minutes ago       Exited              storage-provisioner       2                   4d74ff4efd4cf
	22b142a216a86       6039992312758       2 minutes ago       Running             kube-controller-manager   2                   db280c4aa5a00
	be29558276a2f       6d23ec0e8b87e       2 minutes ago       Running             kube-scheduler            2                   8c354f9295dbe
	e21bed8e2ddc3       0346dbd74bcb9       2 minutes ago       Running             kube-apiserver            2                   75f337c522a2e
	5e0ab1338773e       a8a176a5d5d69       2 minutes ago       Running             etcd                      2                   fd4522e1a15f9
	3bf8f38c0aaf4       d6e3e26021b60       4 minutes ago       Exited              kindnet-cni               1                   627de0fec15d6
	493618898f74d       8c811b4aec35f       4 minutes ago       Exited              busybox                   1                   f0180480d1ea3
	bea8197d27a30       5185b96f0becf       4 minutes ago       Exited              coredns                   1                   99867f7318f33
	e521a3a86451e       beaaf00edd38a       4 minutes ago       Exited              kube-proxy                1                   2141f6b1f9b03
	5d163fc295ef7       6039992312758       4 minutes ago       Exited              kube-controller-manager   1                   35468c6f48085
	6532ace61a778       6d23ec0e8b87e       4 minutes ago       Exited              kube-scheduler            1                   1244b6d56687f
	309af6aa7d073       a8a176a5d5d69       4 minutes ago       Exited              etcd                      1                   8c41e71be6329
	99e10e3b23a18       0346dbd74bcb9       4 minutes ago       Exited              kube-apiserver            1                   44f3aabcd4fbe
	
	* 
	* ==> coredns [bea8197d27a3] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [cef86b3cefbb] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-090641
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-090641
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8d0d2851e022d93d0c1376f6d2f8095068de262
	                    minikube.k8s.io/name=multinode-090641
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_11_07T09_07_07_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Nov 2022 17:07:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-090641
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Nov 2022 17:14:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Nov 2022 17:12:16 +0000   Mon, 07 Nov 2022 17:07:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Nov 2022 17:12:16 +0000   Mon, 07 Nov 2022 17:07:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Nov 2022 17:12:16 +0000   Mon, 07 Nov 2022 17:07:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Nov 2022 17:12:16 +0000   Mon, 07 Nov 2022 17:07:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-090641
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6085664Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6085664Ki
	  pods:               110
	System Info:
	  Machine ID:                 996614ec4c814b87b7ec8ebee3d0e8c9
	  System UUID:                e6c6188b-543c-44ba-ac9f-28dfff8d3fd6
	  Boot ID:                    d6bec1af-42e2-498c-8176-8915b52b45fe
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.20
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-4wm8f                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 coredns-565d847f94-54csh                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     7m43s
	  kube-system                 etcd-multinode-090641                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         7m55s
	  kube-system                 kindnet-mgtrp                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m43s
	  kube-system                 kube-apiserver-multinode-090641             250m (4%)     0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-controller-manager-multinode-090641    200m (3%)     0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-proxy-rqnqb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 kube-scheduler-multinode-090641             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m41s                  kube-proxy       
	  Normal  Starting                 2m44s                  kube-proxy       
	  Normal  Starting                 4m44s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m55s                  kubelet          Node multinode-090641 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  7m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m55s                  kubelet          Node multinode-090641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m55s                  kubelet          Node multinode-090641 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m55s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m43s                  node-controller  Node multinode-090641 event: Registered Node multinode-090641 in Controller
	  Normal  NodeReady                7m35s                  kubelet          Node multinode-090641 status is now: NodeReady
	  Normal  Starting                 4m50s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m49s (x8 over 4m50s)  kubelet          Node multinode-090641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s (x7 over 4m50s)  kubelet          Node multinode-090641 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m49s (x8 over 4m50s)  kubelet          Node multinode-090641 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           4m33s                  node-controller  Node multinode-090641 event: Registered Node multinode-090641 in Controller
	  Normal  Starting                 2m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m51s (x9 over 2m51s)  kubelet          Node multinode-090641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m51s (x7 over 2m51s)  kubelet          Node multinode-090641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m51s (x7 over 2m51s)  kubelet          Node multinode-090641 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m33s                  node-controller  Node multinode-090641 event: Registered Node multinode-090641 in Controller
	
	
	Name:               multinode-090641-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-090641-m02
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Nov 2022 17:12:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-090641-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Nov 2022 17:14:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Nov 2022 17:12:42 +0000   Mon, 07 Nov 2022 17:12:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Nov 2022 17:12:42 +0000   Mon, 07 Nov 2022 17:12:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Nov 2022 17:12:42 +0000   Mon, 07 Nov 2022 17:12:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Nov 2022 17:12:42 +0000   Mon, 07 Nov 2022 17:12:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-090641-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6085664Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6085664Ki
	  pods:               110
	System Info:
	  Machine ID:                 996614ec4c814b87b7ec8ebee3d0e8c9
	  System UUID:                efad4a85-7d7d-4667-913c-a4b2eeefb909
	  Boot ID:                    d6bec1af-42e2-498c-8176-8915b52b45fe
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.20
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-sznj9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kindnet-5d6kd               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m19s
	  kube-system                 kube-proxy-hxglr            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m14s                  kube-proxy  
	  Normal  Starting                 2m17s                  kube-proxy  
	  Normal  Starting                 4m13s                  kube-proxy  
	  Normal  Starting                 7m19s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m19s (x2 over 7m19s)  kubelet     Node multinode-090641-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m19s (x2 over 7m19s)  kubelet     Node multinode-090641-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m19s (x2 over 7m19s)  kubelet     Node multinode-090641-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m58s                  kubelet     Node multinode-090641-m02 status is now: NodeReady
	  Normal  NodeHasSufficientPID     4m16s (x2 over 4m16s)  kubelet     Node multinode-090641-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    4m16s (x2 over 4m16s)  kubelet     Node multinode-090641-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  4m16s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m16s (x2 over 4m16s)  kubelet     Node multinode-090641-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m16s                  kubelet     Starting kubelet.
	  Normal  NodeReady                4m5s                   kubelet     Node multinode-090641-m02 status is now: NodeReady
	  Normal  Starting                 2m27s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m26s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m20s (x7 over 2m27s)  kubelet     Node multinode-090641-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m20s (x7 over 2m27s)  kubelet     Node multinode-090641-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m20s (x7 over 2m27s)  kubelet     Node multinode-090641-m02 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [  +0.001714] FS-Cache: O-key=[8] '488dae0300000000'
	[  +0.001157] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.001539] FS-Cache: N-cookie d=00000000966442c4{9p.inode} n=00000000e1a1a6d0
	[  +0.001710] FS-Cache: N-key=[8] '488dae0300000000'
	[  +0.002207] FS-Cache: Duplicate cookie detected
	[  +0.001068] FS-Cache: O-cookie c=00000007 [p=00000005 fl=226 nc=0 na=1]
	[  +0.001559] FS-Cache: O-cookie d=00000000966442c4{9p.inode} n=000000005ed6919f
	[  +0.001709] FS-Cache: O-key=[8] '488dae0300000000'
	[  +0.001169] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.001534] FS-Cache: N-cookie d=00000000966442c4{9p.inode} n=00000000b2ce95cf
	[  +0.001708] FS-Cache: N-key=[8] '488dae0300000000'
	[  +3.616838] FS-Cache: Duplicate cookie detected
	[  +0.001061] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.001566] FS-Cache: O-cookie d=00000000966442c4{9p.inode} n=0000000097b2e942
	[  +0.001705] FS-Cache: O-key=[8] '478dae0300000000'
	[  +0.001158] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.001518] FS-Cache: N-cookie d=00000000966442c4{9p.inode} n=00000000bf315956
	[  +0.001676] FS-Cache: N-key=[8] '478dae0300000000'
	[  +0.397688] FS-Cache: Duplicate cookie detected
	[  +0.001069] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.001560] FS-Cache: O-cookie d=00000000966442c4{9p.inode} n=0000000057a1aadc
	[  +0.001702] FS-Cache: O-key=[8] '648dae0300000000'
	[  +0.001156] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.001498] FS-Cache: N-cookie d=00000000966442c4{9p.inode} n=0000000083dcca6d
	[  +0.001683] FS-Cache: N-key=[8] '648dae0300000000'
	
	* 
	* ==> etcd [309af6aa7d07] <==
	* {"level":"info","ts":"2022-11-07T17:10:13.914Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-11-07T17:10:13.914Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-11-07T17:10:13.914Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-11-07T17:10:15.700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2022-11-07T17:10:15.701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-11-07T17:10:15.701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-11-07T17:10:15.701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2022-11-07T17:10:15.701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-11-07T17:10:15.701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2022-11-07T17:10:15.701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-11-07T17:10:15.702Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-090641 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-11-07T17:10:15.702Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-07T17:10:15.702Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-07T17:10:15.703Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-11-07T17:10:15.703Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-11-07T17:10:15.703Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-11-07T17:10:15.703Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-11-07T17:11:37.441Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-11-07T17:11:37.441Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"multinode-090641","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/11/07 17:11:37 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/11/07 17:11:37 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-11-07T17:11:37.452Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2022-11-07T17:11:37.458Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-11-07T17:11:37.460Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-11-07T17:11:37.460Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"multinode-090641","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> etcd [5e0ab1338773] <==
	* {"level":"info","ts":"2022-11-07T17:12:13.109Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-11-07T17:12:13.109Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-11-07T17:12:13.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-11-07T17:12:13.110Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-11-07T17:12:13.110Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-11-07T17:12:13.110Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-11-07T17:12:13.111Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-11-07T17:12:13.111Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-11-07T17:12:13.112Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-11-07T17:12:13.112Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-11-07T17:12:13.112Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-11-07T17:12:14.999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 3"}
	{"level":"info","ts":"2022-11-07T17:12:14.999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-11-07T17:12:14.999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-11-07T17:12:14.999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 4"}
	{"level":"info","ts":"2022-11-07T17:12:14.999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 4"}
	{"level":"info","ts":"2022-11-07T17:12:14.999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 4"}
	{"level":"info","ts":"2022-11-07T17:12:14.999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 4"}
	{"level":"info","ts":"2022-11-07T17:12:15.002Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-090641 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-11-07T17:12:15.002Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-07T17:12:15.002Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-07T17:12:15.002Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-11-07T17:12:15.002Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-11-07T17:12:15.004Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-11-07T17:12:15.004Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  17:15:03 up 44 min,  0 users,  load average: 0.21, 0.51, 0.53
	Linux multinode-090641 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [99e10e3b23a1] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1107 17:11:37.446722       1 logging.go:59] [core] [Channel #16 SubChannel #17] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1107 17:11:37.447156       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1107 17:11:37.447549       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [e21bed8e2ddc] <==
	* I1107 17:12:16.557670       1 establishing_controller.go:76] Starting EstablishingController
	I1107 17:12:16.557767       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1107 17:12:16.557888       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1107 17:12:16.558141       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1107 17:12:16.544356       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1107 17:12:16.544347       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1107 17:12:16.544744       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I1107 17:12:16.568866       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1107 17:12:16.618587       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1107 17:12:16.644859       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1107 17:12:16.645204       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I1107 17:12:16.645365       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1107 17:12:16.645558       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1107 17:12:16.658198       1 cache.go:39] Caches are synced for autoregister controller
	I1107 17:12:16.669190       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1107 17:12:16.693452       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I1107 17:12:17.373265       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1107 17:12:17.548294       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1107 17:12:19.038141       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I1107 17:12:19.306061       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I1107 17:12:19.313394       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I1107 17:12:19.399515       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1107 17:12:19.405327       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1107 17:12:29.389993       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1107 17:12:29.588514       1 controller.go:616] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [22b142a216a8] <==
	* I1107 17:12:29.335704       1 shared_informer.go:262] Caches are synced for attach detach
	I1107 17:12:29.386776       1 shared_informer.go:262] Caches are synced for ephemeral
	I1107 17:12:29.390986       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 17:12:29.404485       1 shared_informer.go:262] Caches are synced for disruption
	I1107 17:12:29.422196       1 shared_informer.go:262] Caches are synced for taint
	I1107 17:12:29.422438       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I1107 17:12:29.422609       1 event.go:294] "Event occurred" object="multinode-090641" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-090641 event: Registered Node multinode-090641 in Controller"
	I1107 17:12:29.422640       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I1107 17:12:29.422646       1 event.go:294] "Event occurred" object="multinode-090641-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-090641-m02 event: Registered Node multinode-090641-m02 in Controller"
	I1107 17:12:29.422666       1 taint_manager.go:209] "Sending events to api server"
	W1107 17:12:29.423007       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-090641. Assuming now as a timestamp.
	W1107 17:12:29.423048       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-090641-m02. Assuming now as a timestamp.
	I1107 17:12:29.423062       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I1107 17:12:29.442161       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 17:12:29.797193       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 17:12:29.797224       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1107 17:12:29.805728       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 17:12:39.292768       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-sznj9"
	W1107 17:12:42.332703       1 topologycache.go:199] Can't get CPU or zone information for multinode-090641-m02 node
	W1107 17:12:42.333912       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-090641-m02" does not exist
	I1107 17:12:42.336998       1 range_allocator.go:367] Set node multinode-090641-m02 PodCIDR to [10.244.1.0/24]
	I1107 17:13:09.192066       1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-nwck5"
	I1107 17:13:09.195834       1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-nwck5"
	I1107 17:13:09.195971       1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kindnet-nx5lb"
	I1107 17:13:09.199979       1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-nx5lb"
	
	* 
	* ==> kube-controller-manager [5d163fc295ef] <==
	* I1107 17:10:30.027264       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 17:10:30.030921       1 shared_informer.go:262] Caches are synced for crt configmap
	I1107 17:10:30.049709       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1107 17:10:30.050201       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1107 17:10:30.091259       1 shared_informer.go:262] Caches are synced for disruption
	I1107 17:10:30.099852       1 shared_informer.go:262] Caches are synced for deployment
	I1107 17:10:30.103947       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 17:10:30.415077       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 17:10:30.474825       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 17:10:30.474887       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1107 17:10:43.141162       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-24zbs"
	W1107 17:10:46.145973       1 topologycache.go:199] Can't get CPU or zone information for multinode-090641-m03 node
	W1107 17:10:46.921432       1 topologycache.go:199] Can't get CPU or zone information for multinode-090641-m03 node
	W1107 17:10:46.921539       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-090641-m02" does not exist
	I1107 17:10:46.926107       1 range_allocator.go:367] Set node multinode-090641-m02 PodCIDR to [10.244.1.0/24]
	W1107 17:10:57.015504       1 topologycache.go:199] Can't get CPU or zone information for multinode-090641-m02 node
	I1107 17:11:04.465245       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-gvc9j"
	W1107 17:11:07.477503       1 topologycache.go:199] Can't get CPU or zone information for multinode-090641-m02 node
	W1107 17:11:08.012107       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-090641-m03" does not exist
	W1107 17:11:08.012162       1 topologycache.go:199] Can't get CPU or zone information for multinode-090641-m02 node
	I1107 17:11:08.018469       1 range_allocator.go:367] Set node multinode-090641-m03 PodCIDR to [10.244.2.0/24]
	I1107 17:11:10.415100       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-24zbs" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-24zbs"
	W1107 17:11:18.166255       1 topologycache.go:199] Can't get CPU or zone information for multinode-090641-m02 node
	W1107 17:11:20.847650       1 topologycache.go:199] Can't get CPU or zone information for multinode-090641-m02 node
	I1107 17:11:24.833441       1 event.go:294] "Event occurred" object="multinode-090641-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-090641-m03 event: Removing Node multinode-090641-m03 from Controller"
	
	* 
	* ==> kube-proxy [c04be752c4eb] <==
	* I1107 17:12:18.531812       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I1107 17:12:18.531945       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I1107 17:12:18.532029       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1107 17:12:18.557894       1 server_others.go:206] "Using iptables Proxier"
	I1107 17:12:18.557945       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1107 17:12:18.557955       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1107 17:12:18.557997       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1107 17:12:18.558044       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 17:12:18.558293       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 17:12:18.597717       1 server.go:661] "Version info" version="v1.25.3"
	I1107 17:12:18.597836       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 17:12:18.600334       1 config.go:317] "Starting service config controller"
	I1107 17:12:18.600375       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1107 17:12:18.600484       1 config.go:226] "Starting endpoint slice config controller"
	I1107 17:12:18.600494       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1107 17:12:18.600502       1 config.go:444] "Starting node config controller"
	I1107 17:12:18.600580       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1107 17:12:18.700768       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1107 17:12:18.700911       1 shared_informer.go:262] Caches are synced for node config
	I1107 17:12:18.700924       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-proxy [e521a3a86451] <==
	* I1107 17:10:18.824816       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I1107 17:10:18.825067       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I1107 17:10:18.825170       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1107 17:10:18.846096       1 server_others.go:206] "Using iptables Proxier"
	I1107 17:10:18.846162       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1107 17:10:18.846207       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1107 17:10:18.846222       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1107 17:10:18.846245       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 17:10:18.846519       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 17:10:18.846813       1 server.go:661] "Version info" version="v1.25.3"
	I1107 17:10:18.846859       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 17:10:18.849093       1 config.go:226] "Starting endpoint slice config controller"
	I1107 17:10:18.849122       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1107 17:10:18.849220       1 config.go:444] "Starting node config controller"
	I1107 17:10:18.849226       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1107 17:10:18.849103       1 config.go:317] "Starting service config controller"
	I1107 17:10:18.849722       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1107 17:10:18.949273       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1107 17:10:18.949357       1 shared_informer.go:262] Caches are synced for node config
	I1107 17:10:18.950103       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [6532ace61a77] <==
	* I1107 17:10:14.698486       1 serving.go:348] Generated self-signed cert in-memory
	W1107 17:10:17.226507       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1107 17:10:17.226529       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1107 17:10:17.226540       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1107 17:10:17.226546       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1107 17:10:17.236573       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1107 17:10:17.236608       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 17:10:17.237594       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1107 17:10:17.237642       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1107 17:10:17.237603       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1107 17:10:17.238158       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 17:10:17.338847       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 17:11:37.449962       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1107 17:11:37.450052       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1107 17:11:37.450333       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1107 17:11:37.450892       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E1107 17:11:37.453671       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [be29558276a2] <==
	* I1107 17:12:13.735103       1 serving.go:348] Generated self-signed cert in-memory
	W1107 17:12:16.574256       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1107 17:12:16.574297       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1107 17:12:16.574306       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1107 17:12:16.574311       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1107 17:12:16.604206       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1107 17:12:16.604303       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 17:12:16.605582       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1107 17:12:16.605786       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1107 17:12:16.605854       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 17:12:16.606082       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1107 17:12:16.706103       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-11-07 17:12:03 UTC, end at Mon 2022-11-07 17:15:04 UTC. --
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.298472    1183 topology_manager.go:205] "Topology Admit Handler"
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.298502    1183 topology_manager.go:205] "Topology Admit Handler"
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.298525    1183 topology_manager.go:205] "Topology Admit Handler"
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.451828    1183 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e280b18-683c-4888-93db-3756e665d1f6-config-volume\") pod \"coredns-565d847f94-54csh\" (UID: \"6e280b18-683c-4888-93db-3756e665d1f6\") " pod="kube-system/coredns-565d847f94-54csh"
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.452084    1183 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr6wt\" (UniqueName: \"kubernetes.io/projected/e8094b6c-54ad-4f87-aaf3-88dc5155b128-kube-api-access-fr6wt\") pod \"kindnet-mgtrp\" (UID: \"e8094b6c-54ad-4f87-aaf3-88dc5155b128\") " pod="kube-system/kindnet-mgtrp"
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.452139    1183 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9btrt\" (UniqueName: \"kubernetes.io/projected/f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d-kube-api-access-9btrt\") pod \"kube-proxy-rqnqb\" (UID: \"f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d\") " pod="kube-system/kube-proxy-rqnqb"
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.452170    1183 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/29595449-7701-47e2-af62-0638177bb673-tmp\") pod \"storage-provisioner\" (UID: \"29595449-7701-47e2-af62-0638177bb673\") " pod="kube-system/storage-provisioner"
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.452196    1183 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8094b6c-54ad-4f87-aaf3-88dc5155b128-lib-modules\") pod \"kindnet-mgtrp\" (UID: \"e8094b6c-54ad-4f87-aaf3-88dc5155b128\") " pod="kube-system/kindnet-mgtrp"
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.452219    1183 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d-xtables-lock\") pod \"kube-proxy-rqnqb\" (UID: \"f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d\") " pod="kube-system/kube-proxy-rqnqb"
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.452244    1183 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d-lib-modules\") pod \"kube-proxy-rqnqb\" (UID: \"f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d\") " pod="kube-system/kube-proxy-rqnqb"
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.452267    1183 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8094b6c-54ad-4f87-aaf3-88dc5155b128-xtables-lock\") pod \"kindnet-mgtrp\" (UID: \"e8094b6c-54ad-4f87-aaf3-88dc5155b128\") " pod="kube-system/kindnet-mgtrp"
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.452342    1183 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbcp4\" (UniqueName: \"kubernetes.io/projected/6e280b18-683c-4888-93db-3756e665d1f6-kube-api-access-dbcp4\") pod \"coredns-565d847f94-54csh\" (UID: \"6e280b18-683c-4888-93db-3756e665d1f6\") " pod="kube-system/coredns-565d847f94-54csh"
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.452467    1183 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d-kube-proxy\") pod \"kube-proxy-rqnqb\" (UID: \"f941c40c-9694-47ef-bf6d-7e3d6ecf7a2d\") " pod="kube-system/kube-proxy-rqnqb"
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.452500    1183 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e8094b6c-54ad-4f87-aaf3-88dc5155b128-cni-cfg\") pod \"kindnet-mgtrp\" (UID: \"e8094b6c-54ad-4f87-aaf3-88dc5155b128\") " pod="kube-system/kindnet-mgtrp"
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.452563    1183 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgdvw\" (UniqueName: \"kubernetes.io/projected/29595449-7701-47e2-af62-0638177bb673-kube-api-access-pgdvw\") pod \"storage-provisioner\" (UID: \"29595449-7701-47e2-af62-0638177bb673\") " pod="kube-system/storage-provisioner"
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.452618    1183 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsfgv\" (UniqueName: \"kubernetes.io/projected/6f72e1f3-4f2f-4f04-8b9b-c34beb774e08-kube-api-access-lsfgv\") pod \"busybox-65db55d5d6-4wm8f\" (UID: \"6f72e1f3-4f2f-4f04-8b9b-c34beb774e08\") " pod="default/busybox-65db55d5d6-4wm8f"
	Nov 07 17:12:17 multinode-090641 kubelet[1183]: I1107 17:12:17.452651    1183 reconciler.go:169] "Reconciler: start to sync state"
	Nov 07 17:12:18 multinode-090641 kubelet[1183]: I1107 17:12:18.596136    1183 request.go:682] Waited for 1.041511723s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/coredns/token
	Nov 07 17:12:19 multinode-090641 kubelet[1183]: I1107 17:12:19.214813    1183 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="8827a35b20e4aa66218a0916a4b49d400a5ebf54cb73019242eb34a568daac79"
	Nov 07 17:12:21 multinode-090641 kubelet[1183]: I1107 17:12:21.285754    1183 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Nov 07 17:12:24 multinode-090641 kubelet[1183]: I1107 17:12:24.699687    1183 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Nov 07 17:12:48 multinode-090641 kubelet[1183]: I1107 17:12:48.463996    1183 scope.go:115] "RemoveContainer" containerID="f8d0de33debd6d37c8ec290604d5319630509659243837fe8189e539464af244"
	Nov 07 17:12:48 multinode-090641 kubelet[1183]: I1107 17:12:48.464234    1183 scope.go:115] "RemoveContainer" containerID="05760ea85057cc3db9f498e8b175a63457660be7c052477341edba54c7f2c887"
	Nov 07 17:12:48 multinode-090641 kubelet[1183]: E1107 17:12:48.464344    1183 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(29595449-7701-47e2-af62-0638177bb673)\"" pod="kube-system/storage-provisioner" podUID=29595449-7701-47e2-af62-0638177bb673
	Nov 07 17:12:59 multinode-090641 kubelet[1183]: I1107 17:12:59.387113    1183 scope.go:115] "RemoveContainer" containerID="05760ea85057cc3db9f498e8b175a63457660be7c052477341edba54c7f2c887"
	
	* 
	* ==> storage-provisioner [05760ea85057] <==
	* I1107 17:12:18.134248       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1107 17:12:48.118728       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [a6073532f4cf] <==
	* I1107 17:12:59.477516       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 17:12:59.484512       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 17:12:59.484731       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 17:13:16.881414       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 17:13:16.881524       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"26eda56d-3693-4aae-b1c2-1f4def5f67f7", APIVersion:"v1", ResourceVersion:"1181", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-090641_cfc77b2b-5222-4640-ad1d-f695a852fa41 became leader
	I1107 17:13:16.881699       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-090641_cfc77b2b-5222-4640-ad1d-f695a852fa41!
	I1107 17:13:16.982073       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-090641_cfc77b2b-5222-4640-ad1d-f695a852fa41!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-090641 -n multinode-090641
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-090641 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/RestartMultiNode]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context multinode-090641 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context multinode-090641 describe pod : exit status 1 (37.697333ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context multinode-090641 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/RestartMultiNode (183.45s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (49.73s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2233587279.exe start -p running-upgrade-092304 --memory=2200 --vm-driver=docker 
E1107 09:23:30.918736    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2233587279.exe start -p running-upgrade-092304 --memory=2200 --vm-driver=docker : exit status 70 (35.141389641s)

                                                
                                                
-- stdout --
	* [running-upgrade-092304] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3980018699
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 17:23:20.899671054 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-092304" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 17:23:37.551099977 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-092304", then "minikube start -p running-upgrade-092304 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 18.37 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 38.67 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 61.56 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 82.92 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 105.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 127.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 148.92 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 170.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 192.64 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 214.45 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 233.19 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 253.87 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 276.19 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 299.16 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 320.48 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 342.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 364.47 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 386.56 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 409.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 430.11 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 445.92 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 467.19 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 489.09 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 511.20 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 530.69 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 17:23:37.551099977 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2233587279.exe start -p running-upgrade-092304 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2233587279.exe start -p running-upgrade-092304 --memory=2200 --vm-driver=docker : exit status 70 (4.466573143s)

                                                
                                                
-- stdout --
	* [running-upgrade-092304] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2030002053
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-092304" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2233587279.exe start -p running-upgrade-092304 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2233587279.exe start -p running-upgrade-092304 --memory=2200 --vm-driver=docker : exit status 70 (4.276231701s)

                                                
                                                
-- stdout --
	* [running-upgrade-092304] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3220654150
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-092304" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2022-11-07 09:23:50.996276 -0800 PST m=+2338.328912830
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-092304
helpers_test.go:235: (dbg) docker inspect running-upgrade-092304:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "732c5fc6aadaa9f38e9c91641393d004da705537c29b11f3d1be361c55f94c4e",
	        "Created": "2022-11-07T17:23:29.08107382Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 148949,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T17:23:29.313103455Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/732c5fc6aadaa9f38e9c91641393d004da705537c29b11f3d1be361c55f94c4e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/732c5fc6aadaa9f38e9c91641393d004da705537c29b11f3d1be361c55f94c4e/hostname",
	        "HostsPath": "/var/lib/docker/containers/732c5fc6aadaa9f38e9c91641393d004da705537c29b11f3d1be361c55f94c4e/hosts",
	        "LogPath": "/var/lib/docker/containers/732c5fc6aadaa9f38e9c91641393d004da705537c29b11f3d1be361c55f94c4e/732c5fc6aadaa9f38e9c91641393d004da705537c29b11f3d1be361c55f94c4e-json.log",
	        "Name": "/running-upgrade-092304",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-092304:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3917a63c7ee6ea490ac0584c4feede6c8b37e28f791207d1b8bf975b48fb2a4a-init/diff:/var/lib/docker/overlay2/ce5e11bfc992a1a8f6d985dac2da213fb12466da5bf4a846750e53e75ca3183d/diff:/var/lib/docker/overlay2/6d130a4743287998046b9d2870a4c3b1b72a54dc2be0dd13c60e6a09c7c6d1ab/diff:/var/lib/docker/overlay2/9e0fb86728306987a63c6c365925b606090799d462b09bb60358ac58d788d637/diff:/var/lib/docker/overlay2/91fc9d4187b857d7439368b0f8da43e64916b0898d95e6370c986c39d4476a24/diff:/var/lib/docker/overlay2/f68e41bc8e02432bc31112bab2d913106a4cccbcb0718ec701d12c47ce1a5e6b/diff:/var/lib/docker/overlay2/da0503bae01e0c2f1dcca1507753003e7fa4fab8b811ef4bd9f44b8df7810261/diff:/var/lib/docker/overlay2/eb8aa4c3ba69f570e899a8be5b6fe74f801b0f1881147bbd2eeb7c2ab24fe8a9/diff:/var/lib/docker/overlay2/4380dccd8f6d70b50817097ebe26188055e03da8bd2dbcd94d554a094a07eb8d/diff:/var/lib/docker/overlay2/18799f5eb3f65b38408a5d017386b46eb6cd3dc34d06e68c238a58d3613cb440/diff:/var/lib/docker/overlay2/290277
7b4b29e78962a2c1fba3462009d545930037f1e2e2b297f10095d650ba/diff:/var/lib/docker/overlay2/98a930db05510e1ab529d982ad487d7229a5391f411ee1bf9ca25bfddd6dd810/diff:/var/lib/docker/overlay2/909c11a5c7167fcb51e2180dac2c7233993b270265c034a1975211db9a03a8ab/diff:/var/lib/docker/overlay2/c16a51f54f38775409e86242c130cb2225e6b00bd48b894bb5100e32f56d00ca/diff:/var/lib/docker/overlay2/e8ffa0670460c44067365553ba50cb4acac0d10761dcb01e1cf31148251c540b/diff:/var/lib/docker/overlay2/ba4d5b5c688339adeb153b804f2999c19e5444d8da632f9ff13141b6fd5b1029/diff:/var/lib/docker/overlay2/126c7fcb83dbcfabdbebe29b6aa730b7ae685710d20a010ed79626a2db327da8/diff:/var/lib/docker/overlay2/90df5f99607695ad7b916b512f8bec4f605f0fadc8555f06dbfb4ee5bc0e5d52/diff:/var/lib/docker/overlay2/888a0498617efd125b3a4ff5a0ff12fe733ef971c2844d31903155060f6d99ae/diff:/var/lib/docker/overlay2/ca744951d704db9cc6afe8b68ef931777d66fee2e5f1977f89279dde940a8dc0/diff:/var/lib/docker/overlay2/7050445f9614af96ed42f0d0177701afc718172458c6a1b656c7cf4d7c03e026/diff:/var/lib/d
ocker/overlay2/f7d3f88486de80aa8b10c310182f2a33532c5e2722c2d933c8b36d68af65ab90/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3917a63c7ee6ea490ac0584c4feede6c8b37e28f791207d1b8bf975b48fb2a4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3917a63c7ee6ea490ac0584c4feede6c8b37e28f791207d1b8bf975b48fb2a4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3917a63c7ee6ea490ac0584c4feede6c8b37e28f791207d1b8bf975b48fb2a4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-092304",
	                "Source": "/var/lib/docker/volumes/running-upgrade-092304/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-092304",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-092304",
	                "name.minikube.sigs.k8s.io": "running-upgrade-092304",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3760d97f9227fec1c53a8032f6d4c46542ada51f9d341f261bbdf0f13f811222",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52144"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52146"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3760d97f9227",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "b8876e49a598b353e60b445cab34f93dfe10171d9564d2b6c36e3231b23ed773",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "dffd526401466b8f4ca8a2e3f88ae9d6ca71b5f97245b6f3f21da95307666e43",
	                    "EndpointID": "b8876e49a598b353e60b445cab34f93dfe10171d9564d2b6c36e3231b23ed773",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-092304 -n running-upgrade-092304
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-092304 -n running-upgrade-092304: exit status 6 (385.110402ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 09:23:51.429219   12728 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-092304" does not appear in /Users/jenkins/minikube-integration/15310-2115/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-092304" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-092304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-092304
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-092304: (2.29961633s)
--- FAIL: TestRunningBinaryUpgrade (49.73s)
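Both start attempts above end the same way: the docker.service unit written by the legacy minikube v1.9.0 provisioner is rejected when systemd restarts the service inside the kicbase container, so every retry stops at "Job for docker.service failed because the control process exited with error code." The journal output itself is not captured in this report; the error message points at systemctl status and journalctl for details. A minimal sketch of how one might pull those from the still-running container, assuming the container name "running-upgrade-092304" from the docker inspect output above and that systemd inside the container is reachable via docker exec:

	# Sketch only: run the diagnostics the error message suggests, inside the kic container.
	docker exec running-upgrade-092304 systemctl status docker.service
	docker exec running-upgrade-092304 journalctl -xe -u docker.service
	# Compare the generated unit against what the diff in the log shows was written.
	docker exec running-upgrade-092304 cat /lib/systemd/system/docker.service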

                                                
                                    
x
+
TestKubernetesUpgrade (553.63s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-092150 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-092150 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m10.570682796s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-092150] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-092150 in cluster kubernetes-upgrade-092150
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 09:21:50.881652   11762 out.go:296] Setting OutFile to fd 1 ...
	I1107 09:21:50.882503   11762 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:21:50.882514   11762 out.go:309] Setting ErrFile to fd 2...
	I1107 09:21:50.882521   11762 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:21:50.882791   11762 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	I1107 09:21:50.884014   11762 out.go:303] Setting JSON to false
	I1107 09:21:50.909248   11762 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":3085,"bootTime":1667838625,"procs":393,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1107 09:21:50.909377   11762 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 09:21:50.937078   11762 out.go:177] * [kubernetes-upgrade-092150] minikube v1.28.0 on Darwin 13.0
	I1107 09:21:50.956967   11762 notify.go:220] Checking for updates...
	I1107 09:21:50.978360   11762 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 09:21:51.062066   11762 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:21:51.122356   11762 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 09:21:51.166375   11762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 09:21:51.210166   11762 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	I1107 09:21:51.233471   11762 config.go:180] Loaded profile config "missing-upgrade-092105": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1107 09:21:51.233585   11762 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 09:21:51.298550   11762 docker.go:137] docker version: linux-20.10.20
	I1107 09:21:51.298719   11762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 09:21:51.442300   11762 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:50 SystemTime:2022-11-07 17:21:51.367454074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 09:21:51.482376   11762 out.go:177] * Using the docker driver based on user configuration
	I1107 09:21:51.519063   11762 start.go:282] selected driver: docker
	I1107 09:21:51.519090   11762 start.go:808] validating driver "docker" against <nil>
	I1107 09:21:51.519115   11762 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 09:21:51.523156   11762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 09:21:51.667596   11762 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:68 SystemTime:2022-11-07 17:21:51.594146225 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 09:21:51.667710   11762 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 09:21:51.667853   11762 start_flags.go:883] Wait components to verify : map[apiserver:true system_pods:true]
	I1107 09:21:51.705818   11762 out.go:177] * Using Docker Desktop driver with root privileges
	I1107 09:21:51.728329   11762 cni.go:95] Creating CNI manager for ""
	I1107 09:21:51.728346   11762 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 09:21:51.728357   11762 start_flags.go:317] config:
	{Name:kubernetes-upgrade-092150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-092150 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:21:51.749267   11762 out.go:177] * Starting control plane node kubernetes-upgrade-092150 in cluster kubernetes-upgrade-092150
	I1107 09:21:51.807657   11762 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 09:21:51.829466   11762 out.go:177] * Pulling base image ...
	I1107 09:21:51.887323   11762 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 09:21:51.887355   11762 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 09:21:51.887398   11762 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1107 09:21:51.887418   11762 cache.go:57] Caching tarball of preloaded images
	I1107 09:21:51.887881   11762 preload.go:174] Found /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 09:21:51.887991   11762 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1107 09:21:51.888256   11762 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/config.json ...
	I1107 09:21:51.888307   11762 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/config.json: {Name:mk3d6eb7b2d4d3f1a8b2003d339eb69d03f2838f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:21:51.944505   11762 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 09:21:51.944528   11762 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 09:21:51.944540   11762 cache.go:208] Successfully downloaded all kic artifacts
	I1107 09:21:51.944598   11762 start.go:364] acquiring machines lock for kubernetes-upgrade-092150: {Name:mk1bf278369d6976e7baf3f1db311665af2b3f19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 09:21:51.944788   11762 start.go:368] acquired machines lock for "kubernetes-upgrade-092150" in 174.709µs
	I1107 09:21:51.944829   11762 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-092150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-092150 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 09:21:51.944895   11762 start.go:125] createHost starting for "" (driver="docker")
	I1107 09:21:51.968105   11762 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1107 09:21:51.968540   11762 start.go:159] libmachine.API.Create for "kubernetes-upgrade-092150" (driver="docker")
	I1107 09:21:51.968599   11762 client.go:168] LocalClient.Create starting
	I1107 09:21:51.968765   11762 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem
	I1107 09:21:51.968843   11762 main.go:134] libmachine: Decoding PEM data...
	I1107 09:21:51.968874   11762 main.go:134] libmachine: Parsing certificate...
	I1107 09:21:51.968986   11762 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem
	I1107 09:21:51.969068   11762 main.go:134] libmachine: Decoding PEM data...
	I1107 09:21:51.969085   11762 main.go:134] libmachine: Parsing certificate...
	I1107 09:21:51.969802   11762 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-092150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 09:21:52.027597   11762 cli_runner.go:211] docker network inspect kubernetes-upgrade-092150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 09:21:52.027695   11762 network_create.go:272] running [docker network inspect kubernetes-upgrade-092150] to gather additional debugging logs...
	I1107 09:21:52.027713   11762 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-092150
	W1107 09:21:52.084000   11762 cli_runner.go:211] docker network inspect kubernetes-upgrade-092150 returned with exit code 1
	I1107 09:21:52.084026   11762 network_create.go:275] error running [docker network inspect kubernetes-upgrade-092150]: docker network inspect kubernetes-upgrade-092150: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-092150
	I1107 09:21:52.084040   11762 network_create.go:277] output of [docker network inspect kubernetes-upgrade-092150]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-092150
	
	** /stderr **
	I1107 09:21:52.084154   11762 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 09:21:52.139360   11762 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000f84320] misses:0}
	I1107 09:21:52.139404   11762 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 09:21:52.139418   11762 network_create.go:115] attempt to create docker network kubernetes-upgrade-092150 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1107 09:21:52.139508   11762 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-092150 kubernetes-upgrade-092150
	W1107 09:21:52.194262   11762 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-092150 kubernetes-upgrade-092150 returned with exit code 1
	W1107 09:21:52.194306   11762 network_create.go:107] failed to create docker network kubernetes-upgrade-092150 192.168.49.0/24, will retry: subnet is taken
	I1107 09:21:52.194573   11762 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000f84320] amended:false}} dirty:map[] misses:0}
	I1107 09:21:52.194589   11762 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 09:21:52.194804   11762 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000f84320] amended:true}} dirty:map[192.168.49.0:0xc000f84320 192.168.58.0:0xc0004a4498] misses:0}
	I1107 09:21:52.194827   11762 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 09:21:52.194840   11762 network_create.go:115] attempt to create docker network kubernetes-upgrade-092150 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1107 09:21:52.194932   11762 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-092150 kubernetes-upgrade-092150
	W1107 09:21:52.251587   11762 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-092150 kubernetes-upgrade-092150 returned with exit code 1
	W1107 09:21:52.251622   11762 network_create.go:107] failed to create docker network kubernetes-upgrade-092150 192.168.58.0/24, will retry: subnet is taken
	I1107 09:21:52.251953   11762 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000f84320] amended:true}} dirty:map[192.168.49.0:0xc000f84320 192.168.58.0:0xc0004a4498] misses:1}
	I1107 09:21:52.251971   11762 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 09:21:52.252195   11762 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000f84320] amended:true}} dirty:map[192.168.49.0:0xc000f84320 192.168.58.0:0xc0004a4498 192.168.67.0:0xc000400350] misses:1}
	I1107 09:21:52.252206   11762 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 09:21:52.252214   11762 network_create.go:115] attempt to create docker network kubernetes-upgrade-092150 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1107 09:21:52.252297   11762 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-092150 kubernetes-upgrade-092150
	I1107 09:21:52.340310   11762 network_create.go:99] docker network kubernetes-upgrade-092150 192.168.67.0/24 created
	I1107 09:21:52.340347   11762 kic.go:106] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-092150" container
	I1107 09:21:52.340469   11762 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 09:21:52.396414   11762 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-092150 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-092150 --label created_by.minikube.sigs.k8s.io=true
	I1107 09:21:52.453023   11762 oci.go:103] Successfully created a docker volume kubernetes-upgrade-092150
	I1107 09:21:52.453174   11762 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-092150-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-092150 --entrypoint /usr/bin/test -v kubernetes-upgrade-092150:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1107 09:21:52.910150   11762 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-092150
	I1107 09:21:52.910189   11762 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 09:21:52.910223   11762 kic.go:179] Starting extracting preloaded images to volume ...
	I1107 09:21:52.910373   11762 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-092150:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 09:21:57.577653   11762 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-092150:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (4.667076784s)
	I1107 09:21:57.577681   11762 kic.go:188] duration metric: took 4.667324 seconds to extract preloaded images to volume
	I1107 09:21:57.577806   11762 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 09:21:57.723684   11762 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-092150 --name kubernetes-upgrade-092150 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-092150 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-092150 --network kubernetes-upgrade-092150 --ip 192.168.67.2 --volume kubernetes-upgrade-092150:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1107 09:21:58.090013   11762 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-092150 --format={{.State.Running}}
	I1107 09:21:58.157949   11762 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-092150 --format={{.State.Status}}
	I1107 09:21:58.223806   11762 cli_runner.go:164] Run: docker exec kubernetes-upgrade-092150 stat /var/lib/dpkg/alternatives/iptables
	I1107 09:21:58.356381   11762 oci.go:144] the created container "kubernetes-upgrade-092150" has a running status.
	I1107 09:21:58.356416   11762 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/kubernetes-upgrade-092150/id_rsa...
	I1107 09:21:58.459125   11762 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/kubernetes-upgrade-092150/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 09:21:58.589938   11762 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-092150 --format={{.State.Status}}
	I1107 09:21:58.652759   11762 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 09:21:58.652776   11762 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-092150 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 09:21:58.767044   11762 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-092150 --format={{.State.Status}}
	I1107 09:21:58.826162   11762 machine.go:88] provisioning docker machine ...
	I1107 09:21:58.826198   11762 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-092150"
	I1107 09:21:58.826322   11762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:21:58.887091   11762 main.go:134] libmachine: Using SSH client type: native
	I1107 09:21:58.887291   11762 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52010 <nil> <nil>}
	I1107 09:21:58.887305   11762 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-092150 && echo "kubernetes-upgrade-092150" | sudo tee /etc/hostname
	I1107 09:21:59.023910   11762 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-092150
	
	I1107 09:21:59.024038   11762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:21:59.088136   11762 main.go:134] libmachine: Using SSH client type: native
	I1107 09:21:59.088376   11762 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52010 <nil> <nil>}
	I1107 09:21:59.088399   11762 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-092150' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-092150/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-092150' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 09:21:59.219203   11762 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 09:21:59.219247   11762 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15310-2115/.minikube CaCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15310-2115/.minikube}
	I1107 09:21:59.219291   11762 ubuntu.go:177] setting up certificates
	I1107 09:21:59.219317   11762 provision.go:83] configureAuth start
	I1107 09:21:59.219435   11762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-092150
	I1107 09:21:59.290447   11762 provision.go:138] copyHostCerts
	I1107 09:21:59.290542   11762 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem, removing ...
	I1107 09:21:59.290551   11762 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 09:21:59.290692   11762 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem (1082 bytes)
	I1107 09:21:59.290970   11762 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem, removing ...
	I1107 09:21:59.290977   11762 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 09:21:59.291059   11762 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem (1123 bytes)
	I1107 09:21:59.291215   11762 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem, removing ...
	I1107 09:21:59.291223   11762 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 09:21:59.291306   11762 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem (1679 bytes)
	I1107 09:21:59.291428   11762 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-092150 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-092150]
	I1107 09:21:59.400835   11762 provision.go:172] copyRemoteCerts
	I1107 09:21:59.400914   11762 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 09:21:59.401014   11762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:21:59.466689   11762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52010 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/kubernetes-upgrade-092150/id_rsa Username:docker}
	I1107 09:21:59.556334   11762 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 09:21:59.576584   11762 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1107 09:21:59.597557   11762 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 09:21:59.621838   11762 provision.go:86] duration metric: configureAuth took 402.487241ms
	I1107 09:21:59.621859   11762 ubuntu.go:193] setting minikube options for container-runtime
	I1107 09:21:59.622041   11762 config.go:180] Loaded profile config "kubernetes-upgrade-092150": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1107 09:21:59.622162   11762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:21:59.690836   11762 main.go:134] libmachine: Using SSH client type: native
	I1107 09:21:59.690999   11762 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52010 <nil> <nil>}
	I1107 09:21:59.691012   11762 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 09:21:59.812124   11762 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 09:21:59.812148   11762 ubuntu.go:71] root file system type: overlay
	I1107 09:21:59.812383   11762 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 09:21:59.812505   11762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:21:59.877641   11762 main.go:134] libmachine: Using SSH client type: native
	I1107 09:21:59.877810   11762 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52010 <nil> <nil>}
	I1107 09:21:59.877858   11762 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 09:22:00.006354   11762 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 09:22:00.006496   11762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:22:00.074324   11762 main.go:134] libmachine: Using SSH client type: native
	I1107 09:22:00.074484   11762 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52010 <nil> <nil>}
	I1107 09:22:00.074497   11762 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 09:22:00.665610   11762 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 17:22:00.007698504 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1107 09:22:00.665636   11762 machine.go:91] provisioned docker machine in 1.839401073s
	I1107 09:22:00.665667   11762 client.go:171] LocalClient.Create took 8.69679744s
	I1107 09:22:00.665690   11762 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-092150" took 8.696892835s
	I1107 09:22:00.665701   11762 start.go:300] post-start starting for "kubernetes-upgrade-092150" (driver="docker")
	I1107 09:22:00.665706   11762 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 09:22:00.665800   11762 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 09:22:00.665875   11762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:22:00.724225   11762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52010 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/kubernetes-upgrade-092150/id_rsa Username:docker}
	I1107 09:22:00.808855   11762 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 09:22:00.812339   11762 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 09:22:00.812353   11762 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 09:22:00.812366   11762 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 09:22:00.812371   11762 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 09:22:00.812380   11762 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/addons for local assets ...
	I1107 09:22:00.812474   11762 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/files for local assets ...
	I1107 09:22:00.812653   11762 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> 32672.pem in /etc/ssl/certs
	I1107 09:22:00.812827   11762 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 09:22:00.820201   11762 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:22:00.837549   11762 start.go:303] post-start completed in 171.829562ms
	I1107 09:22:00.838157   11762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-092150
	I1107 09:22:00.919669   11762 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/config.json ...
	I1107 09:22:00.920151   11762 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 09:22:00.920228   11762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:22:00.976454   11762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52010 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/kubernetes-upgrade-092150/id_rsa Username:docker}
	I1107 09:22:01.060306   11762 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 09:22:01.065045   11762 start.go:128] duration metric: createHost completed in 9.119864229s
	I1107 09:22:01.065074   11762 start.go:83] releasing machines lock for "kubernetes-upgrade-092150", held for 9.119999943s
	I1107 09:22:01.065237   11762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-092150
	I1107 09:22:01.123840   11762 ssh_runner.go:195] Run: systemctl --version
	I1107 09:22:01.123841   11762 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1107 09:22:01.123920   11762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:22:01.123939   11762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:22:01.186520   11762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52010 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/kubernetes-upgrade-092150/id_rsa Username:docker}
	I1107 09:22:01.186730   11762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52010 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/kubernetes-upgrade-092150/id_rsa Username:docker}
	I1107 09:22:01.525525   11762 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 09:22:01.536029   11762 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 09:22:01.536102   11762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 09:22:01.545796   11762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 09:22:01.558911   11762 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 09:22:01.624320   11762 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 09:22:01.689072   11762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:22:01.752751   11762 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 09:22:01.953688   11762 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:22:01.982391   11762 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:22:02.057219   11762 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	I1107 09:22:02.057421   11762 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-092150 dig +short host.docker.internal
	I1107 09:22:02.174868   11762 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1107 09:22:02.174995   11762 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1107 09:22:02.179179   11762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 09:22:02.189074   11762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:22:02.247970   11762 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 09:22:02.248058   11762 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 09:22:02.270545   11762 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1107 09:22:02.270564   11762 docker.go:543] Images already preloaded, skipping extraction
	I1107 09:22:02.270665   11762 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 09:22:02.293963   11762 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1107 09:22:02.293982   11762 cache_images.go:84] Images are preloaded, skipping loading
	I1107 09:22:02.294089   11762 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 09:22:02.362510   11762 cni.go:95] Creating CNI manager for ""
	I1107 09:22:02.362525   11762 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 09:22:02.362536   11762 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 09:22:02.362550   11762 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-092150 NodeName:kubernetes-upgrade-092150 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 09:22:02.362664   11762 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-092150"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-092150
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 09:22:02.362752   11762 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-092150 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-092150 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 09:22:02.362835   11762 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1107 09:22:02.370538   11762 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 09:22:02.370601   11762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 09:22:02.377811   11762 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I1107 09:22:02.390628   11762 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 09:22:02.404303   11762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1107 09:22:02.417048   11762 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1107 09:22:02.420968   11762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 09:22:02.430332   11762 certs.go:54] Setting up /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150 for IP: 192.168.67.2
	I1107 09:22:02.430467   11762 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key
	I1107 09:22:02.430545   11762 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key
	I1107 09:22:02.430595   11762 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/client.key
	I1107 09:22:02.430615   11762 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/client.crt with IP's: []
	I1107 09:22:02.546114   11762 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/client.crt ...
	I1107 09:22:02.546137   11762 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/client.crt: {Name:mkfe6d4515299d09f37fa1aceae2a6bf678c4936 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:22:02.546452   11762 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/client.key ...
	I1107 09:22:02.546462   11762 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/client.key: {Name:mkd371ec4cade317c709bff17f8822e55e5574d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:22:02.546689   11762 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/apiserver.key.c7fa3a9e
	I1107 09:22:02.546711   11762 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 09:22:02.683737   11762 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/apiserver.crt.c7fa3a9e ...
	I1107 09:22:02.683751   11762 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/apiserver.crt.c7fa3a9e: {Name:mkc5ac6abb0f9c467c99afceb461101e32afb1c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:22:02.684043   11762 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/apiserver.key.c7fa3a9e ...
	I1107 09:22:02.684051   11762 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/apiserver.key.c7fa3a9e: {Name:mk2d025fa528e6a9a63dfce05e636f1dab3d6c24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:22:02.684248   11762 certs.go:320] copying /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/apiserver.crt
	I1107 09:22:02.684430   11762 certs.go:324] copying /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/apiserver.key
	I1107 09:22:02.684609   11762 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/proxy-client.key
	I1107 09:22:02.684629   11762 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/proxy-client.crt with IP's: []
	I1107 09:22:02.736331   11762 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/proxy-client.crt ...
	I1107 09:22:02.736345   11762 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/proxy-client.crt: {Name:mkd881344b5334c4118c55bd799b210b3e6a7f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:22:02.736625   11762 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/proxy-client.key ...
	I1107 09:22:02.736634   11762 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/proxy-client.key: {Name:mk8b2be477302b452f003e5ed8a83beca536edf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:22:02.737084   11762 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem (1338 bytes)
	W1107 09:22:02.737142   11762 certs.go:384] ignoring /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267_empty.pem, impossibly tiny 0 bytes
	I1107 09:22:02.737159   11762 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 09:22:02.737198   11762 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem (1082 bytes)
	I1107 09:22:02.737232   11762 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem (1123 bytes)
	I1107 09:22:02.737269   11762 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem (1679 bytes)
	I1107 09:22:02.737346   11762 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:22:02.737855   11762 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 09:22:02.756154   11762 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 09:22:02.773198   11762 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 09:22:02.789964   11762 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 09:22:02.807265   11762 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 09:22:02.824385   11762 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 09:22:02.842111   11762 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 09:22:02.859353   11762 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 09:22:02.876279   11762 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem --> /usr/share/ca-certificates/3267.pem (1338 bytes)
	I1107 09:22:02.893937   11762 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /usr/share/ca-certificates/32672.pem (1708 bytes)
	I1107 09:22:02.911227   11762 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 09:22:02.928760   11762 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 09:22:02.941379   11762 ssh_runner.go:195] Run: openssl version
	I1107 09:22:02.947558   11762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32672.pem && ln -fs /usr/share/ca-certificates/32672.pem /etc/ssl/certs/32672.pem"
	I1107 09:22:02.955771   11762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32672.pem
	I1107 09:22:02.959379   11762 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/32672.pem
	I1107 09:22:02.959437   11762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32672.pem
	I1107 09:22:02.964911   11762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32672.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 09:22:02.972396   11762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 09:22:02.980089   11762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:22:02.983782   11762 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:22:02.983838   11762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:22:02.988768   11762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 09:22:02.996775   11762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3267.pem && ln -fs /usr/share/ca-certificates/3267.pem /etc/ssl/certs/3267.pem"
	I1107 09:22:03.004734   11762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3267.pem
	I1107 09:22:03.008738   11762 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/3267.pem
	I1107 09:22:03.008791   11762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3267.pem
	I1107 09:22:03.014189   11762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3267.pem /etc/ssl/certs/51391683.0"
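The openssl/ln sequence above installs each extra CA into the node's trust store using the usual OpenSSL layout: the PEM is copied to /usr/share/ca-certificates, its subject hash is computed, and a <hash>.0 symlink is created under /etc/ssl/certs (b5213941.0, 3ec20f2e.0 and 51391683.0 above). A minimal sketch of that pattern for a single certificate, with an illustrative example.pem path:

	CERT=/usr/share/ca-certificates/example.pem        # illustrative path; this run uses 3267.pem, 32672.pem and minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")      # short subject hash OpenSSL expects as the link name
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # e.g. /etc/ssl/certs/b5213941.0 for minikubeCA.pem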
	I1107 09:22:03.021747   11762 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-092150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-092150 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:22:03.021862   11762 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 09:22:03.043847   11762 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 09:22:03.051319   11762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 09:22:03.058674   11762 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 09:22:03.058738   11762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 09:22:03.065886   11762 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 09:22:03.065911   11762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 09:22:03.112254   11762 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1107 09:22:03.112342   11762 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 09:22:03.402735   11762 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 09:22:03.402816   11762 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 09:22:03.402905   11762 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 09:22:03.627002   11762 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 09:22:03.627854   11762 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 09:22:03.635095   11762 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1107 09:22:03.705620   11762 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 09:22:03.748308   11762 out.go:204]   - Generating certificates and keys ...
	I1107 09:22:03.748413   11762 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 09:22:03.748515   11762 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 09:22:03.968120   11762 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 09:22:04.058351   11762 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1107 09:22:04.538254   11762 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1107 09:22:04.738552   11762 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1107 09:22:05.039678   11762 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1107 09:22:05.040043   11762 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-092150 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1107 09:22:05.200391   11762 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1107 09:22:05.200593   11762 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-092150 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1107 09:22:05.374578   11762 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 09:22:05.518134   11762 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 09:22:05.715308   11762 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1107 09:22:05.715383   11762 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 09:22:05.886886   11762 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 09:22:06.096951   11762 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 09:22:06.210776   11762 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 09:22:06.932530   11762 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 09:22:06.934158   11762 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 09:22:06.957773   11762 out.go:204]   - Booting up control plane ...
	I1107 09:22:06.957860   11762 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 09:22:06.957941   11762 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 09:22:06.958014   11762 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 09:22:06.958087   11762 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 09:22:06.958204   11762 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 09:22:46.925776   11762 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1107 09:22:46.926678   11762 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:22:46.926897   11762 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:22:51.925287   11762 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:22:51.925503   11762 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:23:01.918754   11762 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:23:01.918913   11762 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:23:21.906280   11762 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:23:21.906419   11762 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:24:01.879770   11762 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:24:01.879925   11762 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:24:01.879953   11762 kubeadm.go:317] 
	I1107 09:24:01.879994   11762 kubeadm.go:317] Unfortunately, an error has occurred:
	I1107 09:24:01.880051   11762 kubeadm.go:317] 	timed out waiting for the condition
	I1107 09:24:01.880059   11762 kubeadm.go:317] 
	I1107 09:24:01.880086   11762 kubeadm.go:317] This error is likely caused by:
	I1107 09:24:01.880139   11762 kubeadm.go:317] 	- The kubelet is not running
	I1107 09:24:01.880248   11762 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1107 09:24:01.880259   11762 kubeadm.go:317] 
	I1107 09:24:01.880331   11762 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1107 09:24:01.880352   11762 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1107 09:24:01.880379   11762 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1107 09:24:01.880386   11762 kubeadm.go:317] 
	I1107 09:24:01.880510   11762 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1107 09:24:01.880579   11762 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1107 09:24:01.880643   11762 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1107 09:24:01.880690   11762 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1107 09:24:01.880751   11762 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1107 09:24:01.880779   11762 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1107 09:24:01.883785   11762 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1107 09:24:01.883897   11762 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1107 09:24:01.883997   11762 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 09:24:01.884062   11762 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1107 09:24:01.884126   11762 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W1107 09:24:01.884322   11762 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-092150 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-092150 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-092150 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-092150 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
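At this point the first kubeadm init attempt has timed out waiting for the kubelet, so minikube tears the partial control plane down and retries once with the same configuration (the reset and second init appear below). Roughly the equivalent manual sequence on the node, sketched only from the commands visible in this log:

	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=<same list as above>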
	
	I1107 09:24:01.884351   11762 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1107 09:24:02.305065   11762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 09:24:02.314769   11762 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 09:24:02.314834   11762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 09:24:02.322711   11762 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 09:24:02.322731   11762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 09:24:02.370753   11762 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1107 09:24:02.370802   11762 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 09:24:02.676776   11762 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 09:24:02.676866   11762 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 09:24:02.676963   11762 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 09:24:02.935272   11762 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 09:24:02.936761   11762 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 09:24:02.945230   11762 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1107 09:24:03.002213   11762 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 09:24:03.045596   11762 out.go:204]   - Generating certificates and keys ...
	I1107 09:24:03.045702   11762 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 09:24:03.045793   11762 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 09:24:03.045875   11762 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1107 09:24:03.045965   11762 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1107 09:24:03.046109   11762 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1107 09:24:03.046177   11762 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1107 09:24:03.046259   11762 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1107 09:24:03.046390   11762 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1107 09:24:03.046513   11762 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1107 09:24:03.046622   11762 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1107 09:24:03.046680   11762 kubeadm.go:317] [certs] Using the existing "sa" key
	I1107 09:24:03.046751   11762 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 09:24:03.200488   11762 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 09:24:03.521905   11762 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 09:24:03.755861   11762 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 09:24:03.823873   11762 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 09:24:03.824662   11762 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 09:24:03.845773   11762 out.go:204]   - Booting up control plane ...
	I1107 09:24:03.845962   11762 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 09:24:03.846092   11762 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 09:24:03.846204   11762 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 09:24:03.846313   11762 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 09:24:03.846569   11762 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 09:24:43.812488   11762 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1107 09:24:43.812793   11762 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:24:43.812980   11762 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:24:48.811447   11762 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:24:48.811657   11762 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:24:58.804451   11762 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:24:58.804610   11762 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:25:18.791959   11762 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:25:18.792135   11762 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:25:58.765513   11762 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:25:58.765765   11762 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:25:58.765780   11762 kubeadm.go:317] 
	I1107 09:25:58.765817   11762 kubeadm.go:317] Unfortunately, an error has occurred:
	I1107 09:25:58.765856   11762 kubeadm.go:317] 	timed out waiting for the condition
	I1107 09:25:58.765862   11762 kubeadm.go:317] 
	I1107 09:25:58.765896   11762 kubeadm.go:317] This error is likely caused by:
	I1107 09:25:58.765939   11762 kubeadm.go:317] 	- The kubelet is not running
	I1107 09:25:58.766044   11762 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1107 09:25:58.766051   11762 kubeadm.go:317] 
	I1107 09:25:58.766201   11762 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1107 09:25:58.766258   11762 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1107 09:25:58.766304   11762 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1107 09:25:58.766315   11762 kubeadm.go:317] 
	I1107 09:25:58.766445   11762 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1107 09:25:58.766579   11762 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1107 09:25:58.766650   11762 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1107 09:25:58.766703   11762 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1107 09:25:58.766760   11762 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1107 09:25:58.766793   11762 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1107 09:25:58.769393   11762 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1107 09:25:58.769495   11762 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1107 09:25:58.769595   11762 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 09:25:58.769665   11762 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1107 09:25:58.769746   11762 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1107 09:25:58.769771   11762 kubeadm.go:398] StartCluster complete in 3m55.740958979s
	I1107 09:25:58.769871   11762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:25:58.792470   11762 logs.go:274] 0 containers: []
	W1107 09:25:58.792482   11762 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:25:58.792589   11762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:25:58.816499   11762 logs.go:274] 0 containers: []
	W1107 09:25:58.816511   11762 logs.go:276] No container was found matching "etcd"
	I1107 09:25:58.816591   11762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:25:58.842582   11762 logs.go:274] 0 containers: []
	W1107 09:25:58.842594   11762 logs.go:276] No container was found matching "coredns"
	I1107 09:25:58.842675   11762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:25:58.870441   11762 logs.go:274] 0 containers: []
	W1107 09:25:58.870459   11762 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:25:58.870543   11762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:25:58.893923   11762 logs.go:274] 0 containers: []
	W1107 09:25:58.893938   11762 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:25:58.894026   11762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:25:58.918164   11762 logs.go:274] 0 containers: []
	W1107 09:25:58.918177   11762 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:25:58.918259   11762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:25:58.942512   11762 logs.go:274] 0 containers: []
	W1107 09:25:58.942526   11762 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:25:58.942607   11762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:25:58.968064   11762 logs.go:274] 0 containers: []
	W1107 09:25:58.968079   11762 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:25:58.968090   11762 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:25:58.968100   11762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:25:59.036335   11762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:25:59.036349   11762 logs.go:123] Gathering logs for Docker ...
	I1107 09:25:59.036356   11762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:25:59.053861   11762 logs.go:123] Gathering logs for container status ...
	I1107 09:25:59.053877   11762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:26:01.104979   11762 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051028329s)
	I1107 09:26:01.105091   11762 logs.go:123] Gathering logs for kubelet ...
	I1107 09:26:01.105098   11762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:26:01.143847   11762 logs.go:123] Gathering logs for dmesg ...
	I1107 09:26:01.143864   11762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
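Because no control-plane containers were found, minikube falls back to node-level diagnostics. The same data can be collected by hand on the node; the commands below simply mirror the runner calls above:

	sudo journalctl -u kubelet -n 400                                        # kubelet logs (the component that never became healthy)
	sudo journalctl -u docker -n 400                                         # container runtime logs
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a            # container status
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings and errors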
	W1107 09:26:01.157112   11762 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1107 09:26:01.157132   11762 out.go:239] * 
	* 
	W1107 09:26:01.157226   11762 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 09:26:01.157242   11762 out.go:239] * 
	* 
	W1107 09:26:01.157849   11762 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 09:26:01.221850   11762 out.go:177] 
	W1107 09:26:01.264033   11762 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 09:26:01.264148   11762 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1107 09:26:01.264207   11762 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1107 09:26:01.305999   11762 out.go:177] 

** /stderr **
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-092150 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-092150

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-092150: (1.681595685s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-092150 status --format={{.Host}}

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-092150 status --format={{.Host}}: exit status 7 (257.507396ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-092150 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker 
E1107 09:26:06.281987    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-092150 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker : (4m34.177231556s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-092150 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-092150 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-092150 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (493.721428ms)

-- stdout --
	* [kubernetes-upgrade-092150] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-092150
	    minikube start -p kubernetes-upgrade-092150 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0921502 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.3, by running:
	    
	    minikube start -p kubernetes-upgrade-092150 --kubernetes-version=v1.25.3
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-092150 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker 
E1107 09:30:38.268716    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-092150 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker : (19.826547649s)
version_upgrade_test.go:286: *** TestKubernetesUpgrade FAILED at 2022-11-07 09:30:57.906789 -0800 PST m=+2765.226620046
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-092150
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-092150:

-- stdout --
	[
	    {
	        "Id": "3d1b7fbcb9ddcf6b7215d1c756e61b48da7e486b8c8c3781d62aa800ec7ae933",
	        "Created": "2022-11-07T17:21:57.781361326Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 163711,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T17:26:04.922209886Z",
	            "FinishedAt": "2022-11-07T17:26:01.991919513Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/3d1b7fbcb9ddcf6b7215d1c756e61b48da7e486b8c8c3781d62aa800ec7ae933/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3d1b7fbcb9ddcf6b7215d1c756e61b48da7e486b8c8c3781d62aa800ec7ae933/hostname",
	        "HostsPath": "/var/lib/docker/containers/3d1b7fbcb9ddcf6b7215d1c756e61b48da7e486b8c8c3781d62aa800ec7ae933/hosts",
	        "LogPath": "/var/lib/docker/containers/3d1b7fbcb9ddcf6b7215d1c756e61b48da7e486b8c8c3781d62aa800ec7ae933/3d1b7fbcb9ddcf6b7215d1c756e61b48da7e486b8c8c3781d62aa800ec7ae933-json.log",
	        "Name": "/kubernetes-upgrade-092150",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-092150:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-092150",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1d2bb162731521ed1232c50a5758d2d7a65d38d5675a0c81d663e8aef36aacb1-init/diff:/var/lib/docker/overlay2/8ef76795356079208b1acef7376be67a28d951b743a50dd56a60b0d456568ae9/diff:/var/lib/docker/overlay2/f9288d2baad2a30057af35c115d2ebfb4650d5d1d798a60a2334facced392980/diff:/var/lib/docker/overlay2/270f6ca71b47e51691c54d669e6e8e86c321939c053498289406eab5aa0462f5/diff:/var/lib/docker/overlay2/ebe3fe002872a87a7cc54a77192a2ea1f0efb3730f887abec35652e72f152f46/diff:/var/lib/docker/overlay2/83c9d5ae9817ab2b318ad7ba44ade4fe9c22378e15e338b8fe94c5998fbac5c4/diff:/var/lib/docker/overlay2/6426b1d4e4f369bec5066b3c17c47f9c451787be596ba417de62155901d14061/diff:/var/lib/docker/overlay2/f409955dc1056669a5ee00fa64ecfa9733f3de1a92beefeeca73cba51d930189/diff:/var/lib/docker/overlay2/3ecb7ca97b99ba70c03450a3d6d4a4452c7e9e348eec3cf89e6e8ee51aba6a8b/diff:/var/lib/docker/overlay2/9dd8fffded9665b1b7a326cb2bb3e29e3b716cdba6544940490326ddcbfe2bda/diff:/var/lib/docker/overlay2/b43aed
d977d94230f77efb53c193c1a02895ea314fcdece500155052dfeb6b29/diff:/var/lib/docker/overlay2/ba3bd8f651e3503bd8eadf3ce01b8930edaf7eb6af4044593c756be0f3c5d03a/diff:/var/lib/docker/overlay2/359c64a8e323929352da8612c231ccf0f6be76af37c8a208a9ee98c3bce5e2a1/diff:/var/lib/docker/overlay2/868ec2aea7bce1a74dcdf6c7a708b34838e8c08e795aad6e5b974d1ab15b719c/diff:/var/lib/docker/overlay2/0438a0192165f11b19940586b456c07bfa31d015147b9d008aafaacc09fbc40c/diff:/var/lib/docker/overlay2/80a13b6491a8f9f1c0f6848a375575c20f50d592cb34f21491050776a56fca61/diff:/var/lib/docker/overlay2/dd29a4d45bcf60d3684330374a82b3f3bde4245c5d49661ffdd516cd0c0af260/diff:/var/lib/docker/overlay2/ef8c6936e45d238f2880da0d94945cb610fba8a9e38cdfb3ae6674a82a8f0480/diff:/var/lib/docker/overlay2/9934f45b2cecf953b6f56ee634f63c3dd99c8c358b74fee64fdc62cef64f7723/diff:/var/lib/docker/overlay2/f5ccdcf1811b84ddfcc2efdc07e5feefa2803c1fe476b6653b0a6af55c2e684f/diff:/var/lib/docker/overlay2/2b3b062a0d083aedf009b6c8dde21debe0396b301936ec1950364a1d0ef86b6d/diff:/var/lib/d
ocker/overlay2/db91c57bd6754e3dbdc6c234df413d494606d408e284454bf7ab30cd23f9e840/diff:/var/lib/docker/overlay2/6538f86ce38383e3a133480b44c25afa8b31a61935d6f87270e2cc139e424425/diff:/var/lib/docker/overlay2/80972648e2aa65675fe7f3de22feae57951c0092d5f963f2430650b071940bba/diff:/var/lib/docker/overlay2/19dc0f28f2a85362d2b586f65ab00efa8a97868656af9dc5911259dd3ca649ac/diff:/var/lib/docker/overlay2/99eff050eadab512f36f80d63e8b57d9aa45ef607d723d7ac3f20ece8310a758/diff:/var/lib/docker/overlay2/d6309ab08fa5212992e2b5125645ad32bce2940b50c5e8a5b72e7c7531eb80b4/diff:/var/lib/docker/overlay2/c4d3d6d4212753e50a5f68577281382a30773fb33ca98730aebdfd86d48f612c/diff:/var/lib/docker/overlay2/4292068e16912b59305479ae020d9aa923d57157c4a28dd11e69102be9c1541a/diff:/var/lib/docker/overlay2/2274c567eadc1a99c8173258b3794df0df44fd1abac0aaae2100133ad15b3f30/diff:/var/lib/docker/overlay2/e3bb447cc7563c5af39c4076a93bb7b33bd1a7c6c5ccef7fea2a6a99deddf9f3/diff:/var/lib/docker/overlay2/4329b8a4d7648d8e3bb46a144b9939a5026fa69e5ac188a778cf6ede21a
9627e/diff:/var/lib/docker/overlay2/b600639ff99f881a9eb993fd36e2faf1c0f88a869675ab9d8ec116efc2642784/diff:/var/lib/docker/overlay2/da083fbec4f2fa2681bbaaaa559fdcc46ec2a520e7b9ced39197e805a661fda3/diff:/var/lib/docker/overlay2/63848d00284d16d750a7e746c8be62f8c15819bc2fcb72297788f3c9647257e6/diff:/var/lib/docker/overlay2/3fd667008c6a5c1c5828bb4e003fc21c477a31c4d59b5b675a3886d8a7cb782d/diff:/var/lib/docker/overlay2/6b125cd950aed912fcc597ce8a96bbb5af3dbba111d6eb683ea981387e02e99d/diff:/var/lib/docker/overlay2/b4c672faa14a55ba585c6063024785d7913afc546dd6d04975591d2e13d7b52f/diff:/var/lib/docker/overlay2/c2c0287a05145a26d3313d4e33799ea96103a20115734a66a3c2af8fe728b170/diff:/var/lib/docker/overlay2/dba7b9788bd657997c8cee3b3ef21f9bc4ade7b5a0da25526255047311da571d/diff:/var/lib/docker/overlay2/1f3ae87b3ce804fde9f857de6cb225d5afa00aa39260d197d77f67e840e2d285/diff:/var/lib/docker/overlay2/603b72832425bade21ef2d76583dbe61a46ff7fbe7277673cbc6cd52cf7613dd/diff:/var/lib/docker/overlay2/a47793b1e0564c094c05134af06d2d46a6bcb7
6089b3836b831863ef51c21684/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d2bb162731521ed1232c50a5758d2d7a65d38d5675a0c81d663e8aef36aacb1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d2bb162731521ed1232c50a5758d2d7a65d38d5675a0c81d663e8aef36aacb1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d2bb162731521ed1232c50a5758d2d7a65d38d5675a0c81d663e8aef36aacb1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-092150",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-092150/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-092150",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-092150",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-092150",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f1de6c380ec9e3379ffbae52df777ff24d3763a83327e2686f161b068317c647",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52354"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52355"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52351"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52352"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52353"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f1de6c380ec9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-092150": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3d1b7fbcb9dd",
	                        "kubernetes-upgrade-092150"
	                    ],
	                    "NetworkID": "22e8a84beb9985404d07a340caef677e387e4f032bec0e33c8f0f71a2090e0e5",
	                    "EndpointID": "0389f55fcabd72106ebb948a3ea86d2870bd02c1f7667a66f7fdcf40cb9796da",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-092150 -n kubernetes-upgrade-092150
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-092150 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-092150 logs -n 25: (2.692051191s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| profile | list --output json             | minikube                  | jenkins | v1.28.0 | 07 Nov 22 09:25 PST | 07 Nov 22 09:25 PST |
	| delete  | -p pause-092353                | pause-092353              | jenkins | v1.28.0 | 07 Nov 22 09:25 PST | 07 Nov 22 09:25 PST |
	| start   | -p NoKubernetes-092534         | NoKubernetes-092534       | jenkins | v1.28.0 | 07 Nov 22 09:25 PST |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-092534         | NoKubernetes-092534       | jenkins | v1.28.0 | 07 Nov 22 09:25 PST | 07 Nov 22 09:26 PST |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-092150   | kubernetes-upgrade-092150 | jenkins | v1.28.0 | 07 Nov 22 09:26 PST | 07 Nov 22 09:26 PST |
	| start   | -p NoKubernetes-092534         | NoKubernetes-092534       | jenkins | v1.28.0 | 07 Nov 22 09:26 PST | 07 Nov 22 09:26 PST |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-092150   | kubernetes-upgrade-092150 | jenkins | v1.28.0 | 07 Nov 22 09:26 PST | 07 Nov 22 09:30 PST |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-092534         | NoKubernetes-092534       | jenkins | v1.28.0 | 07 Nov 22 09:26 PST | 07 Nov 22 09:26 PST |
	| start   | -p NoKubernetes-092534         | NoKubernetes-092534       | jenkins | v1.28.0 | 07 Nov 22 09:26 PST | 07 Nov 22 09:26 PST |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-092534 sudo    | NoKubernetes-092534       | jenkins | v1.28.0 | 07 Nov 22 09:26 PST |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| profile | list                           | minikube                  | jenkins | v1.28.0 | 07 Nov 22 09:26 PST | 07 Nov 22 09:26 PST |
	| profile | list --output=json             | minikube                  | jenkins | v1.28.0 | 07 Nov 22 09:26 PST | 07 Nov 22 09:26 PST |
	| stop    | -p NoKubernetes-092534         | NoKubernetes-092534       | jenkins | v1.28.0 | 07 Nov 22 09:26 PST | 07 Nov 22 09:26 PST |
	| start   | -p NoKubernetes-092534         | NoKubernetes-092534       | jenkins | v1.28.0 | 07 Nov 22 09:26 PST | 07 Nov 22 09:26 PST |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-092534 sudo    | NoKubernetes-092534       | jenkins | v1.28.0 | 07 Nov 22 09:26 PST |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-092534         | NoKubernetes-092534       | jenkins | v1.28.0 | 07 Nov 22 09:26 PST | 07 Nov 22 09:26 PST |
	| start   | -p force-systemd-flag-092652   | force-systemd-flag-092652 | jenkins | v1.28.0 | 07 Nov 22 09:26 PST | 07 Nov 22 09:27 PST |
	|         | --memory=2048 --force-systemd  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-092652      | force-systemd-flag-092652 | jenkins | v1.28.0 | 07 Nov 22 09:27 PST | 07 Nov 22 09:27 PST |
	|         | ssh docker info --format       |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-092652   | force-systemd-flag-092652 | jenkins | v1.28.0 | 07 Nov 22 09:27 PST | 07 Nov 22 09:27 PST |
	| start   | -p force-systemd-env-092749    | force-systemd-env-092749  | jenkins | v1.28.0 | 07 Nov 22 09:27 PST | 07 Nov 22 09:28 PST |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-092749       | force-systemd-env-092749  | jenkins | v1.28.0 | 07 Nov 22 09:28 PST | 07 Nov 22 09:28 PST |
	|         | ssh docker info --format       |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-092749    | force-systemd-env-092749  | jenkins | v1.28.0 | 07 Nov 22 09:28 PST | 07 Nov 22 09:28 PST |
	| start   | -p cert-expiration-092821      | cert-expiration-092821    | jenkins | v1.28.0 | 07 Nov 22 09:28 PST | 07 Nov 22 09:28 PST |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --cert-expiration=3m           |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-092150   | kubernetes-upgrade-092150 | jenkins | v1.28.0 | 07 Nov 22 09:30 PST |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-092150   | kubernetes-upgrade-092150 | jenkins | v1.28.0 | 07 Nov 22 09:30 PST | 07 Nov 22 09:30 PST |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 09:30:38
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 09:30:38.133519   14510 out.go:296] Setting OutFile to fd 1 ...
	I1107 09:30:38.133697   14510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:30:38.133703   14510 out.go:309] Setting ErrFile to fd 2...
	I1107 09:30:38.133707   14510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:30:38.133813   14510 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	I1107 09:30:38.134317   14510 out.go:303] Setting JSON to false
	I1107 09:30:38.153063   14510 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":3613,"bootTime":1667838625,"procs":393,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1107 09:30:38.153168   14510 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 09:30:38.174735   14510 out.go:177] * [kubernetes-upgrade-092150] minikube v1.28.0 on Darwin 13.0
	I1107 09:30:38.217853   14510 notify.go:220] Checking for updates...
	I1107 09:30:38.239577   14510 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 09:30:38.261339   14510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:30:38.303455   14510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 09:30:38.345356   14510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 09:30:38.387180   14510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	I1107 09:30:38.408909   14510 config.go:180] Loaded profile config "kubernetes-upgrade-092150": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:30:38.409407   14510 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 09:30:38.471080   14510 docker.go:137] docker version: linux-20.10.20
	I1107 09:30:38.471237   14510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 09:30:38.614211   14510 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:57 SystemTime:2022-11-07 17:30:38.533245247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 09:30:38.636234   14510 out.go:177] * Using the docker driver based on existing profile
	I1107 09:30:38.657684   14510 start.go:282] selected driver: docker
	I1107 09:30:38.657730   14510 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-092150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-092150 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custo
mQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:30:38.657838   14510 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 09:30:38.661527   14510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 09:30:38.804558   14510 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:57 SystemTime:2022-11-07 17:30:38.723242428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 09:30:38.804705   14510 cni.go:95] Creating CNI manager for ""
	I1107 09:30:38.804718   14510 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 09:30:38.804737   14510 start_flags.go:317] config:
	{Name:kubernetes-upgrade-092150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-092150 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:30:38.880275   14510 out.go:177] * Starting control plane node kubernetes-upgrade-092150 in cluster kubernetes-upgrade-092150
	I1107 09:30:38.917158   14510 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 09:30:38.938302   14510 out.go:177] * Pulling base image ...
	I1107 09:30:38.980041   14510 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 09:30:38.980060   14510 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 09:30:38.980093   14510 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 09:30:38.980104   14510 cache.go:57] Caching tarball of preloaded images
	I1107 09:30:38.980237   14510 preload.go:174] Found /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 09:30:38.980248   14510 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 09:30:38.980775   14510 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/config.json ...
	I1107 09:30:39.035380   14510 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 09:30:39.035394   14510 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 09:30:39.035404   14510 cache.go:208] Successfully downloaded all kic artifacts
	I1107 09:30:39.035442   14510 start.go:364] acquiring machines lock for kubernetes-upgrade-092150: {Name:mk1bf278369d6976e7baf3f1db311665af2b3f19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 09:30:39.035530   14510 start.go:368] acquired machines lock for "kubernetes-upgrade-092150" in 68.958µs
	I1107 09:30:39.035555   14510 start.go:96] Skipping create...Using existing machine configuration
	I1107 09:30:39.035573   14510 fix.go:55] fixHost starting: 
	I1107 09:30:39.035821   14510 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-092150 --format={{.State.Status}}
	I1107 09:30:39.093500   14510 fix.go:103] recreateIfNeeded on kubernetes-upgrade-092150: state=Running err=<nil>
	W1107 09:30:39.093536   14510 fix.go:129] unexpected machine state, will restart: <nil>
	I1107 09:30:39.115515   14510 out.go:177] * Updating the running docker "kubernetes-upgrade-092150" container ...
	I1107 09:30:39.174055   14510 machine.go:88] provisioning docker machine ...
	I1107 09:30:39.174148   14510 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-092150"
	I1107 09:30:39.174325   14510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:30:39.233660   14510 main.go:134] libmachine: Using SSH client type: native
	I1107 09:30:39.233873   14510 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52354 <nil> <nil>}
	I1107 09:30:39.233887   14510 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-092150 && echo "kubernetes-upgrade-092150" | sudo tee /etc/hostname
	I1107 09:30:39.358938   14510 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-092150
	
	I1107 09:30:39.359054   14510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:30:39.417011   14510 main.go:134] libmachine: Using SSH client type: native
	I1107 09:30:39.417176   14510 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52354 <nil> <nil>}
	I1107 09:30:39.417191   14510 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-092150' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-092150/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-092150' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 09:30:39.532932   14510 main.go:134] libmachine: SSH cmd err, output: <nil>: 
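
Editor's note: the hostname and /etc/hosts provisioning above is performed by running shell commands over the container's forwarded SSH port (127.0.0.1:52354 in this run). A minimal sketch of that pattern, assuming golang.org/x/crypto/ssh and a hypothetical key path; this illustrates the mechanism only and is not minikube's sshutil/libmachine implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the forwarded SSH port and runs a single command,
// mirroring the "About to run SSH command" / "SSH cmd err, output" pairs above.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only: skip host key verification
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:52354", "docker",
		"/path/to/id_rsa", // hypothetical key path; the log uses the profile's machines/.../id_rsa
		`sudo hostname kubernetes-upgrade-092150 && echo "kubernetes-upgrade-092150" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
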
	I1107 09:30:39.532953   14510 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15310-2115/.minikube CaCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15310-2115/.minikube}
	I1107 09:30:39.532971   14510 ubuntu.go:177] setting up certificates
	I1107 09:30:39.532982   14510 provision.go:83] configureAuth start
	I1107 09:30:39.533063   14510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-092150
	I1107 09:30:39.590750   14510 provision.go:138] copyHostCerts
	I1107 09:30:39.590858   14510 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem, removing ...
	I1107 09:30:39.590874   14510 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 09:30:39.590981   14510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem (1082 bytes)
	I1107 09:30:39.591207   14510 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem, removing ...
	I1107 09:30:39.591213   14510 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 09:30:39.591281   14510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem (1123 bytes)
	I1107 09:30:39.591989   14510 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem, removing ...
	I1107 09:30:39.592077   14510 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 09:30:39.592201   14510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem (1679 bytes)
	I1107 09:30:39.592597   14510 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-092150 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-092150]
	I1107 09:30:39.715497   14510 provision.go:172] copyRemoteCerts
	I1107 09:30:39.715563   14510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 09:30:39.715627   14510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:30:39.773977   14510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52354 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/kubernetes-upgrade-092150/id_rsa Username:docker}
	I1107 09:30:39.860409   14510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 09:30:39.878608   14510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1107 09:30:39.898191   14510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 09:30:39.916707   14510 provision.go:86] duration metric: configureAuth took 383.70012ms
	I1107 09:30:39.916720   14510 ubuntu.go:193] setting minikube options for container-runtime
	I1107 09:30:39.916875   14510 config.go:180] Loaded profile config "kubernetes-upgrade-092150": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:30:39.916955   14510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:30:39.975447   14510 main.go:134] libmachine: Using SSH client type: native
	I1107 09:30:39.975657   14510 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52354 <nil> <nil>}
	I1107 09:30:39.975667   14510 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 09:30:40.093279   14510 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 09:30:40.093290   14510 ubuntu.go:71] root file system type: overlay
	I1107 09:30:40.093424   14510 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 09:30:40.093519   14510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:30:40.151448   14510 main.go:134] libmachine: Using SSH client type: native
	I1107 09:30:40.151596   14510 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52354 <nil> <nil>}
	I1107 09:30:40.151647   14510 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 09:30:40.277275   14510 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 09:30:40.277404   14510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:30:40.335985   14510 main.go:134] libmachine: Using SSH client type: native
	I1107 09:30:40.336141   14510 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 52354 <nil> <nil>}
	I1107 09:30:40.336155   14510 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 09:30:40.456337   14510 main.go:134] libmachine: SSH cmd err, output: <nil>: 
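
Editor's note: the command above is the idempotent unit-update idiom: diff the rendered docker.service against the installed one, and only when they differ move the new file into place, reload systemd, and restart dockerd. A rough local sketch of the same compare-then-swap pattern, assuming root on a systemd host; the helper name and local execution (rather than over SSH) are illustrative, not minikube's own code.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnitIfChanged installs the desired unit only when it differs from the
// current one, then reloads systemd and restarts the service, avoiding a
// needless daemon restart when nothing changed.
func updateUnitIfChanged(path string, desired []byte, service string) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, desired) {
		return nil // unit already up to date
	}
	if err := os.WriteFile(path+".new", desired, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", service},
		{"systemctl", "restart", service},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %s: %w", args, out, err)
		}
	}
	return nil
}

func main() {
	err := updateUnitIfChanged("/lib/systemd/system/docker.service",
		[]byte("[Unit]\nDescription=Docker Application Container Engine\n"), // placeholder content
		"docker")
	fmt.Println(err)
}
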
	I1107 09:30:40.456355   14510 machine.go:91] provisioned docker machine in 1.282205926s
	I1107 09:30:40.456365   14510 start.go:300] post-start starting for "kubernetes-upgrade-092150" (driver="docker")
	I1107 09:30:40.456372   14510 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 09:30:40.456451   14510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 09:30:40.456525   14510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:30:40.513866   14510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52354 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/kubernetes-upgrade-092150/id_rsa Username:docker}
	I1107 09:30:40.600635   14510 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 09:30:40.604413   14510 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 09:30:40.604430   14510 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 09:30:40.604438   14510 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 09:30:40.604443   14510 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 09:30:40.604458   14510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/addons for local assets ...
	I1107 09:30:40.604570   14510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/files for local assets ...
	I1107 09:30:40.604766   14510 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> 32672.pem in /etc/ssl/certs
	I1107 09:30:40.604971   14510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 09:30:40.612886   14510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:30:40.630335   14510 start.go:303] post-start completed in 173.951125ms
	I1107 09:30:40.630452   14510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 09:30:40.630525   14510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:30:40.689603   14510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52354 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/kubernetes-upgrade-092150/id_rsa Username:docker}
	I1107 09:30:40.772759   14510 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 09:30:40.777319   14510 fix.go:57] fixHost completed within 1.741701917s
	I1107 09:30:40.777331   14510 start.go:83] releasing machines lock for "kubernetes-upgrade-092150", held for 1.74174112s
	I1107 09:30:40.777435   14510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-092150
	I1107 09:30:40.835354   14510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 09:30:40.835371   14510 ssh_runner.go:195] Run: systemctl --version
	I1107 09:30:40.835440   14510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:30:40.835444   14510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:30:40.896065   14510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52354 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/kubernetes-upgrade-092150/id_rsa Username:docker}
	I1107 09:30:40.896099   14510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52354 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/kubernetes-upgrade-092150/id_rsa Username:docker}
	I1107 09:30:41.038426   14510 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 09:30:41.048204   14510 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 09:30:41.048277   14510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 09:30:41.057687   14510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 09:30:41.074724   14510 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 09:30:41.169114   14510 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 09:30:41.253751   14510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:30:41.339373   14510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 09:30:43.512046   14510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.172588584s)
	I1107 09:30:43.512127   14510 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 09:30:43.582989   14510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:30:43.656651   14510 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1107 09:30:43.666093   14510 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 09:30:43.666210   14510 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 09:30:43.670194   14510 start.go:472] Will wait 60s for crictl version
	I1107 09:30:43.670252   14510 ssh_runner.go:195] Run: sudo crictl version
	I1107 09:30:43.699913   14510 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1107 09:30:43.700007   14510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:30:43.735300   14510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:30:43.808867   14510 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1107 09:30:43.809060   14510 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-092150 dig +short host.docker.internal
	I1107 09:30:43.980403   14510 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1107 09:30:43.980574   14510 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1107 09:30:43.986055   14510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:30:44.048586   14510 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 09:30:44.048688   14510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 09:30:44.085041   14510 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1107 09:30:44.085057   14510 docker.go:543] Images already preloaded, skipping extraction
	I1107 09:30:44.085155   14510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 09:30:44.153151   14510 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1107 09:30:44.153180   14510 cache_images.go:84] Images are preloaded, skipping loading
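
Editor's note: the `docker images --format {{.Repository}}:{{.Tag}}` runs above are how the preload check decides whether to extract the cached tarball: if every image needed for v1.25.3 is already in the daemon, loading is skipped. A minimal sketch of that check, assuming a local docker CLI; the required-image list below is a subset copied from the output above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloadedImagesPresent reports whether every required image tag is already
// known to the local docker daemon.
func preloadedImagesPresent(required []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range required {
		if !have[img] {
			return false, nil // at least one image missing: extract the preload tarball
		}
	}
	return true, nil
}

func main() {
	ok, err := preloadedImagesPresent([]string{
		"registry.k8s.io/kube-apiserver:v1.25.3",
		"registry.k8s.io/etcd:3.5.4-0",
		"registry.k8s.io/coredns/coredns:v1.9.3",
	})
	fmt.Println(ok, err)
}
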
	I1107 09:30:44.153315   14510 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 09:30:44.291985   14510 cni.go:95] Creating CNI manager for ""
	I1107 09:30:44.292001   14510 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 09:30:44.292014   14510 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 09:30:44.292030   14510 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-092150 NodeName:kubernetes-upgrade-092150 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 09:30:44.292140   14510 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-092150"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 09:30:44.292230   14510 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-092150 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-092150 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 09:30:44.292301   14510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1107 09:30:44.300116   14510 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 09:30:44.300198   14510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 09:30:44.309478   14510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (487 bytes)
	I1107 09:30:44.354762   14510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 09:30:44.367893   14510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2047 bytes)
	I1107 09:30:44.381857   14510 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1107 09:30:44.385932   14510 certs.go:54] Setting up /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150 for IP: 192.168.67.2
	I1107 09:30:44.386078   14510 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key
	I1107 09:30:44.386153   14510 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key
	I1107 09:30:44.386260   14510 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/client.key
	I1107 09:30:44.386367   14510 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/apiserver.key.c7fa3a9e
	I1107 09:30:44.386448   14510 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/proxy-client.key
	I1107 09:30:44.386707   14510 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem (1338 bytes)
	W1107 09:30:44.386758   14510 certs.go:384] ignoring /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267_empty.pem, impossibly tiny 0 bytes
	I1107 09:30:44.386771   14510 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 09:30:44.386812   14510 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem (1082 bytes)
	I1107 09:30:44.386853   14510 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem (1123 bytes)
	I1107 09:30:44.386893   14510 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem (1679 bytes)
	I1107 09:30:44.386975   14510 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:30:44.387580   14510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 09:30:44.407069   14510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 09:30:44.425913   14510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 09:30:44.444466   14510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 09:30:44.465637   14510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 09:30:44.488257   14510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 09:30:44.508110   14510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 09:30:44.530450   14510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 09:30:44.550776   14510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /usr/share/ca-certificates/32672.pem (1708 bytes)
	I1107 09:30:44.571033   14510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 09:30:44.593956   14510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem --> /usr/share/ca-certificates/3267.pem (1338 bytes)
	I1107 09:30:44.614246   14510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 09:30:44.631840   14510 ssh_runner.go:195] Run: openssl version
	I1107 09:30:44.637332   14510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32672.pem && ln -fs /usr/share/ca-certificates/32672.pem /etc/ssl/certs/32672.pem"
	I1107 09:30:44.645776   14510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32672.pem
	I1107 09:30:44.649766   14510 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/32672.pem
	I1107 09:30:44.649840   14510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32672.pem
	I1107 09:30:44.656882   14510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32672.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 09:30:44.665831   14510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 09:30:44.674176   14510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:30:44.678050   14510 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:30:44.678119   14510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:30:44.683411   14510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 09:30:44.690600   14510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3267.pem && ln -fs /usr/share/ca-certificates/3267.pem /etc/ssl/certs/3267.pem"
	I1107 09:30:44.698671   14510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3267.pem
	I1107 09:30:44.702603   14510 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/3267.pem
	I1107 09:30:44.702658   14510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3267.pem
	I1107 09:30:44.707799   14510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3267.pem /etc/ssl/certs/51391683.0"
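
Editor's note: the sequence above installs each extra CA under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted roots. A small sketch of the same hash-and-symlink step, assuming the openssl binary is on PATH and the process can write to /etc/ssl/certs.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes `openssl x509 -hash -noout -in certPath` and
// creates the <certsDir>/<hash>.0 symlink OpenSSL expects.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace a stale link if present
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}
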
	I1107 09:30:44.717300   14510 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-092150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-092150 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:30:44.717425   14510 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 09:30:44.738736   14510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 09:30:44.746522   14510 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1107 09:30:44.746538   14510 kubeadm.go:627] restartCluster start
	I1107 09:30:44.746596   14510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1107 09:30:44.757959   14510 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:30:44.758061   14510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:30:44.821020   14510 kubeconfig.go:92] found "kubernetes-upgrade-092150" server: "https://127.0.0.1:52353"
	I1107 09:30:44.821811   14510 kapi.go:59] client config for kubernetes-upgrade-092150: &rest.Config{Host:"https://127.0.0.1:52353", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/client.key", CAFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2345ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 09:30:44.822367   14510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1107 09:30:44.830151   14510 api_server.go:165] Checking apiserver status ...
	I1107 09:30:44.830215   14510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:30:44.839132   14510 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/12693/cgroup
	W1107 09:30:44.847396   14510 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/12693/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:30:44.847469   14510 ssh_runner.go:195] Run: ls
	I1107 09:30:44.851426   14510 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52353/healthz ...
	I1107 09:30:47.541672   14510 api_server.go:278] https://127.0.0.1:52353/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 09:30:47.541692   14510 retry.go:31] will retry after 263.082536ms: https://127.0.0.1:52353/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 09:30:47.804960   14510 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52353/healthz ...
	I1107 09:30:47.812437   14510 api_server.go:278] https://127.0.0.1:52353/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 09:30:47.812471   14510 retry.go:31] will retry after 381.329545ms: https://127.0.0.1:52353/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 09:30:48.194692   14510 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52353/healthz ...
	I1107 09:30:48.200823   14510 api_server.go:278] https://127.0.0.1:52353/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 09:30:48.200845   14510 retry.go:31] will retry after 422.765636ms: https://127.0.0.1:52353/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 09:30:48.623911   14510 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52353/healthz ...
	I1107 09:30:48.631463   14510 api_server.go:278] https://127.0.0.1:52353/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 09:30:48.631483   14510 retry.go:31] will retry after 473.074753ms: https://127.0.0.1:52353/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 09:30:49.105701   14510 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52353/healthz ...
	I1107 09:30:49.112470   14510 api_server.go:278] https://127.0.0.1:52353/healthz returned 200:
	ok
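
Editor's note: the 403 and 500 responses above are expected while the restarted apiserver finishes its bootstrap hooks (the RBAC and priority-class post-start hooks are the ones still failing); the restart logic simply polls /healthz with short backoffs until it returns 200. A minimal sketch of that wait loop; it trusts only the cluster CA and probes unauthenticated (which matches the anonymous 403 seen above), and the timeout and backoff schedule are illustrative.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
	"time"
)

// waitForHealthz polls the apiserver until /healthz returns 200 or the deadline passes.
func waitForHealthz(baseURL, caFile string, deadline time.Duration) error {
	ca, err := os.ReadFile(caFile)
	if err != nil {
		return err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(ca)
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	stop := time.Now().Add(deadline)
	for backoff := 250 * time.Millisecond; time.Now().Before(stop); backoff *= 2 {
		resp, err := client.Get(baseURL + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is healthy
			}
		}
		time.Sleep(backoff)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", deadline)
}

func main() {
	err := waitForHealthz("https://127.0.0.1:52353", "/Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt", time.Minute)
	fmt.Println(err)
}
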
	I1107 09:30:49.124842   14510 system_pods.go:86] 5 kube-system pods found
	I1107 09:30:49.124858   14510 system_pods.go:89] "etcd-kubernetes-upgrade-092150" [5c3663ba-50e5-4ebd-b0b2-9e02591dff2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1107 09:30:49.124864   14510 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-092150" [b07e7f5f-266f-45d7-b377-b1b0a6fb5fe0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1107 09:30:49.124874   14510 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-092150" [fd6400a9-117a-4dd4-8619-1dacaa89b93c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 09:30:49.124883   14510 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-092150" [24cd9d66-aa9c-4000-88c1-e716e67c2484] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1107 09:30:49.124889   14510 system_pods.go:89] "storage-provisioner" [2ff9c3c0-92ea-4fd9-8933-a4fdd557e4f3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1107 09:30:49.124896   14510 kubeadm.go:611] needs reconfigure: missing components: kube-dns, kube-proxy
	I1107 09:30:49.124902   14510 kubeadm.go:1114] stopping kube-system containers ...
	I1107 09:30:49.124983   14510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 09:30:49.151013   14510 docker.go:444] Stopping containers: [439753e8d5e4 17848f2267b1 d4457dd1e19f f14dbb0a0403 c82c1942e36a 90f530d6617b 572b81e6527c 3ec8568eb40b b63ab1eedbe4 8fa33caf9c66 9597bfd6cd81 238acaf2b684 36e291c04730 90a649959ba6 df6b6a4e10de e848c11e7a54 db5d02468a55]
	I1107 09:30:49.151106   14510 ssh_runner.go:195] Run: docker stop 439753e8d5e4 17848f2267b1 d4457dd1e19f f14dbb0a0403 c82c1942e36a 90f530d6617b 572b81e6527c 3ec8568eb40b b63ab1eedbe4 8fa33caf9c66 9597bfd6cd81 238acaf2b684 36e291c04730 90a649959ba6 df6b6a4e10de e848c11e7a54 db5d02468a55
	I1107 09:30:49.975658   14510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1107 09:30:50.068472   14510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 09:30:50.078215   14510 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Nov  7 17:30 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Nov  7 17:30 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Nov  7 17:30 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Nov  7 17:30 /etc/kubernetes/scheduler.conf
	
	I1107 09:30:50.078286   14510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1107 09:30:50.086955   14510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1107 09:30:50.095482   14510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1107 09:30:50.103049   14510 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:30:50.103114   14510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1107 09:30:50.110436   14510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1107 09:30:50.143011   14510 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:30:50.143095   14510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1107 09:30:50.153233   14510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 09:30:50.161748   14510 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1107 09:30:50.161758   14510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:30:50.210655   14510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:30:50.757418   14510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:30:50.900780   14510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:30:50.951497   14510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:30:51.048789   14510 api_server.go:51] waiting for apiserver process to appear ...
	I1107 09:30:51.048863   14510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:30:51.563944   14510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:30:52.063969   14510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:30:52.075852   14510 api_server.go:71] duration metric: took 1.027036953s to wait for apiserver process to appear ...
	I1107 09:30:52.075872   14510 api_server.go:87] waiting for apiserver healthz status ...
	I1107 09:30:52.075882   14510 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52353/healthz ...
	I1107 09:30:55.004909   14510 api_server.go:278] https://127.0.0.1:52353/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1107 09:30:55.004928   14510 api_server.go:102] status: https://127.0.0.1:52353/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 09:30:55.506114   14510 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52353/healthz ...
	I1107 09:30:55.513985   14510 api_server.go:278] https://127.0.0.1:52353/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 09:30:55.514002   14510 api_server.go:102] status: https://127.0.0.1:52353/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 09:30:56.005352   14510 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52353/healthz ...
	I1107 09:30:56.010803   14510 api_server.go:278] https://127.0.0.1:52353/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 09:30:56.010823   14510 api_server.go:102] status: https://127.0.0.1:52353/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 09:30:56.505019   14510 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52353/healthz ...
	I1107 09:30:56.511405   14510 api_server.go:278] https://127.0.0.1:52353/healthz returned 200:
	ok
	I1107 09:30:56.518583   14510 api_server.go:140] control plane version: v1.25.3
	I1107 09:30:56.518596   14510 api_server.go:130] duration metric: took 4.442584957s to wait for apiserver health ...
	I1107 09:30:56.518602   14510 cni.go:95] Creating CNI manager for ""
	I1107 09:30:56.518607   14510 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 09:30:56.518616   14510 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 09:30:56.523606   14510 system_pods.go:59] 5 kube-system pods found
	I1107 09:30:56.523620   14510 system_pods.go:61] "etcd-kubernetes-upgrade-092150" [5c3663ba-50e5-4ebd-b0b2-9e02591dff2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1107 09:30:56.523628   14510 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-092150" [b07e7f5f-266f-45d7-b377-b1b0a6fb5fe0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1107 09:30:56.523637   14510 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-092150" [fd6400a9-117a-4dd4-8619-1dacaa89b93c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 09:30:56.523644   14510 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-092150" [24cd9d66-aa9c-4000-88c1-e716e67c2484] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1107 09:30:56.523649   14510 system_pods.go:61] "storage-provisioner" [2ff9c3c0-92ea-4fd9-8933-a4fdd557e4f3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1107 09:30:56.523655   14510 system_pods.go:74] duration metric: took 5.033263ms to wait for pod list to return data ...
	I1107 09:30:56.523674   14510 node_conditions.go:102] verifying NodePressure condition ...
	I1107 09:30:56.526323   14510 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1107 09:30:56.526336   14510 node_conditions.go:123] node cpu capacity is 6
	I1107 09:30:56.526350   14510 node_conditions.go:105] duration metric: took 2.669246ms to run NodePressure ...
	I1107 09:30:56.526362   14510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:30:56.641962   14510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 09:30:56.649156   14510 ops.go:34] apiserver oom_adj: -16
	I1107 09:30:56.649164   14510 kubeadm.go:631] restartCluster took 11.902265193s
	I1107 09:30:56.649172   14510 kubeadm.go:398] StartCluster complete in 11.931520382s
	I1107 09:30:56.649185   14510 settings.go:142] acquiring lock: {Name:mkacd69bfe5f4d7bab8b044c0ff487fe5c3f0cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:30:56.649280   14510 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:30:56.649903   14510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/kubeconfig: {Name:mk892d56d979702eee7d784abc692970bda7bca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:30:56.650656   14510 kapi.go:59] client config for kubernetes-upgrade-092150: &rest.Config{Host:"https://127.0.0.1:52353", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/client.key", CAFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2345ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 09:30:56.653370   14510 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-092150" rescaled to 1
	I1107 09:30:56.653401   14510 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 09:30:56.653407   14510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 09:30:56.653429   14510 addons.go:486] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I1107 09:30:56.675535   14510 out.go:177] * Verifying Kubernetes components...
	I1107 09:30:56.653557   14510 config.go:180] Loaded profile config "kubernetes-upgrade-092150": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:30:56.675594   14510 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-092150"
	I1107 09:30:56.675594   14510 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-092150"
	I1107 09:30:56.719146   14510 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1107 09:30:56.748731   14510 addons.go:227] Setting addon storage-provisioner=true in "kubernetes-upgrade-092150"
	I1107 09:30:56.748752   14510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-092150"
	W1107 09:30:56.748764   14510 addons.go:236] addon storage-provisioner should already be in state true
	I1107 09:30:56.748817   14510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 09:30:56.748867   14510 host.go:66] Checking if "kubernetes-upgrade-092150" exists ...
	I1107 09:30:56.749326   14510 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-092150 --format={{.State.Status}}
	I1107 09:30:56.750468   14510 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-092150 --format={{.State.Status}}
	I1107 09:30:56.764609   14510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:30:56.841553   14510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 09:30:56.818541   14510 kapi.go:59] client config for kubernetes-upgrade-092150: &rest.Config{Host:"https://127.0.0.1:52353", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/client.key", CAFile:"/Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2345ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 09:30:56.849384   14510 addons.go:227] Setting addon default-storageclass=true in "kubernetes-upgrade-092150"
	W1107 09:30:56.863500   14510 addons.go:236] addon default-storageclass should already be in state true
	I1107 09:30:56.863545   14510 host.go:66] Checking if "kubernetes-upgrade-092150" exists ...
	I1107 09:30:56.863563   14510 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 09:30:56.863579   14510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 09:30:56.863700   14510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:30:56.865237   14510 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-092150 --format={{.State.Status}}
	I1107 09:30:56.871991   14510 api_server.go:51] waiting for apiserver process to appear ...
	I1107 09:30:56.872073   14510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:30:56.882444   14510 api_server.go:71] duration metric: took 229.017597ms to wait for apiserver process to appear ...
	I1107 09:30:56.882464   14510 api_server.go:87] waiting for apiserver healthz status ...
	I1107 09:30:56.882477   14510 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52353/healthz ...
	I1107 09:30:56.889144   14510 api_server.go:278] https://127.0.0.1:52353/healthz returned 200:
	ok
	I1107 09:30:56.890678   14510 api_server.go:140] control plane version: v1.25.3
	I1107 09:30:56.890688   14510 api_server.go:130] duration metric: took 8.219356ms to wait for apiserver health ...
	I1107 09:30:56.890693   14510 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 09:30:56.895268   14510 system_pods.go:59] 5 kube-system pods found
	I1107 09:30:56.895305   14510 system_pods.go:61] "etcd-kubernetes-upgrade-092150" [5c3663ba-50e5-4ebd-b0b2-9e02591dff2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1107 09:30:56.895313   14510 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-092150" [b07e7f5f-266f-45d7-b377-b1b0a6fb5fe0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1107 09:30:56.895329   14510 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-092150" [fd6400a9-117a-4dd4-8619-1dacaa89b93c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 09:30:56.895338   14510 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-092150" [24cd9d66-aa9c-4000-88c1-e716e67c2484] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1107 09:30:56.895344   14510 system_pods.go:61] "storage-provisioner" [2ff9c3c0-92ea-4fd9-8933-a4fdd557e4f3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1107 09:30:56.895349   14510 system_pods.go:74] duration metric: took 4.65147ms to wait for pod list to return data ...
	I1107 09:30:56.895356   14510 kubeadm.go:573] duration metric: took 241.93458ms to wait for : map[apiserver:true system_pods:true] ...
	I1107 09:30:56.895364   14510 node_conditions.go:102] verifying NodePressure condition ...
	I1107 09:30:56.898874   14510 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1107 09:30:56.898886   14510 node_conditions.go:123] node cpu capacity is 6
	I1107 09:30:56.898897   14510 node_conditions.go:105] duration metric: took 3.529128ms to run NodePressure ...
	I1107 09:30:56.898903   14510 start.go:217] waiting for startup goroutines ...
	I1107 09:30:56.929041   14510 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 09:30:56.929055   14510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 09:30:56.929143   14510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-092150
	I1107 09:30:56.929782   14510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52354 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/kubernetes-upgrade-092150/id_rsa Username:docker}
	I1107 09:30:56.989386   14510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52354 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/kubernetes-upgrade-092150/id_rsa Username:docker}
	I1107 09:30:57.022238   14510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 09:30:57.096632   14510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 09:30:57.738437   14510 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1107 09:30:57.776482   14510 addons.go:488] enableAddons completed in 1.123025493s
	I1107 09:30:57.776947   14510 ssh_runner.go:195] Run: rm -f paused
	I1107 09:30:57.815499   14510 start.go:506] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I1107 09:30:57.836510   14510 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-092150" cluster and "default" namespace by default
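The probes in this run repeatedly hit the apiserver's /healthz endpoint on the forwarded port until it returns 200; the [+]/[-] listing is the per-check verbose output the apiserver reports while a post-start hook (here rbac/bootstrap-roles) is still failing. A minimal sketch of reproducing the same probe by hand, assuming this run's forwarded port (52353) and the profile certificate paths logged above:

  curl --cacert /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt \
       --cert /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/client.crt \
       --key /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubernetes-upgrade-092150/client.key \
       "https://127.0.0.1:52353/healthz?verbose"

Without the client certificate the request is treated as system:anonymous and rejected with 403, which matches the first probe in this sequence.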
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-11-07 17:26:05 UTC, end at Mon 2022-11-07 17:30:59 UTC. --
	Nov 07 17:30:42 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:42.988827267Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Nov 07 17:30:42 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:42.995147879Z" level=info msg="Loading containers: start."
	Nov 07 17:30:43 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:43.078754956Z" level=info msg="ignoring event" container=b63ab1eedbe43d95c43661b478e6298bdaa52d13513f8bedc7ce58af0604b3ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:30:43 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:43.086283308Z" level=info msg="ignoring event" container=9597bfd6cd813395eb1a2b743f87c49532020bf53df7189998f05e155cc334f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:30:43 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:43.090864148Z" level=info msg="ignoring event" container=8fa33caf9c666abb9f12c745d9cdd9dc1e2884450da65547d383b8d1fe012ace module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:30:43 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:43.101322650Z" level=info msg="ignoring event" container=3ec8568eb40be3fa1b4d60d3dd90cfe01eb588d1513a2e81be4479e6800d4db4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:30:43 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:43.272837885Z" level=info msg="Removing stale sandbox b63702892ddcb53a7c121e17f4f2d223a75996d014d0e5e1040bc53ae364f2bd (9597bfd6cd813395eb1a2b743f87c49532020bf53df7189998f05e155cc334f3)"
	Nov 07 17:30:43 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:43.274052747Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 70d71c68e7110cde5fbe19d3c684eaba3b56973f23f4d75608ee5d11d60b62de 9289a54f3ae7bba200f8848a9683c350bf7e31d43ed543a756e12915b368f1d4], retrying...."
	Nov 07 17:30:43 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:43.356433067Z" level=info msg="Removing stale sandbox c4512ba29ed681963de335dddcc3a650a7dd696fbce5c935eb631d8989fff7d6 (572b81e6527cf48a3520704dcfce0bbb91aa4d1dd46f674e4c6a03246e2f0935)"
	Nov 07 17:30:43 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:43.357767088Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 70d71c68e7110cde5fbe19d3c684eaba3b56973f23f4d75608ee5d11d60b62de 44b7894d062c36784a75c0bf7df67fc48a344661971d84dd300986ddbceff24e], retrying...."
	Nov 07 17:30:43 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:43.439064846Z" level=info msg="Removing stale sandbox e183cfbac88b9f534cf9b5823499aebba8dd1361e66e3f45ed1c98c134230a7b (8fa33caf9c666abb9f12c745d9cdd9dc1e2884450da65547d383b8d1fe012ace)"
	Nov 07 17:30:43 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:43.440331542Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 70d71c68e7110cde5fbe19d3c684eaba3b56973f23f4d75608ee5d11d60b62de d84719248dae8c99795c5f5a2ec8ea204d6f7ad855b02774812cfc60bb794962], retrying...."
	Nov 07 17:30:43 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:43.463197921Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 07 17:30:43 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:43.497187985Z" level=info msg="Loading containers: done."
	Nov 07 17:30:43 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:43.506319941Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 07 17:30:43 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:43.506415816Z" level=info msg="Daemon has completed initialization"
	Nov 07 17:30:43 kubernetes-upgrade-092150 systemd[1]: Started Docker Application Container Engine.
	Nov 07 17:30:43 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:43.529683568Z" level=info msg="API listen on [::]:2376"
	Nov 07 17:30:43 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:43.535638831Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 07 17:30:49 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:49.266138765Z" level=info msg="ignoring event" container=f14dbb0a0403aa5bbe9afa33f55ac9efeb4e5d3d56cc9c1d5a1bb6202f4e8f5c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:30:49 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:49.269468505Z" level=info msg="ignoring event" container=90f530d6617bc32a2c3a3d6ac7cb7244609579165317a9fa20a7ea0400dd6456 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:30:49 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:49.269494800Z" level=info msg="ignoring event" container=d4457dd1e19f336db3491acc92698e192747199648052affb623a14ec7e8de16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:30:49 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:49.269507406Z" level=info msg="ignoring event" container=c82c1942e36a9524b707a88dcf5261f60df7b01274a680b6d39c1cd5f764344e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:30:49 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:49.276350993Z" level=info msg="ignoring event" container=439753e8d5e4ee73c0119ade7f2e1244bf502b25c8d34bc6054fa3d7384def48 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:30:49 kubernetes-upgrade-092150 dockerd[12052]: time="2022-11-07T17:30:49.964641670Z" level=info msg="ignoring event" container=17848f2267b1c66e328cd902fdf05b5480f2341b9d896016fb1b9f3b577d32fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	85a5d7f62c6dc       0346dbd74bcb9       8 seconds ago       Running             kube-apiserver            2                   2f0882cda9042
	860f86d6b9adb       a8a176a5d5d69       8 seconds ago       Running             etcd                      2                   38c9a321cf0c3
	a519438ba19b3       6d23ec0e8b87e       8 seconds ago       Running             kube-scheduler            2                   7cad580a1df7d
	9404b4cff7789       6039992312758       8 seconds ago       Running             kube-controller-manager   2                   3ca3128134f73
	439753e8d5e4e       a8a176a5d5d69       15 seconds ago      Exited              etcd                      1                   f14dbb0a0403a
	17848f2267b1c       0346dbd74bcb9       15 seconds ago      Exited              kube-apiserver            1                   90f530d6617bc
	3ec8568eb40be       6d23ec0e8b87e       18 seconds ago      Exited              kube-scheduler            1                   8fa33caf9c666
	b63ab1eedbe43       6039992312758       18 seconds ago      Exited              kube-controller-manager   1                   9597bfd6cd813
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-092150
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-092150
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8d0d2851e022d93d0c1376f6d2f8095068de262
	                    minikube.k8s.io/name=kubernetes-upgrade-092150
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_11_07T09_30_35_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Nov 2022 17:30:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-092150
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Nov 2022 17:30:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Nov 2022 17:30:55 +0000   Mon, 07 Nov 2022 17:30:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Nov 2022 17:30:55 +0000   Mon, 07 Nov 2022 17:30:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Nov 2022 17:30:55 +0000   Mon, 07 Nov 2022 17:30:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Nov 2022 17:30:55 +0000   Mon, 07 Nov 2022 17:30:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    kubernetes-upgrade-092150
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6085664Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6085664Ki
	  pods:               110
	System Info:
	  Machine ID:                 996614ec4c814b87b7ec8ebee3d0e8c9
	  System UUID:                da7668e4-0ad2-400d-ae96-5f53a018e277
	  Boot ID:                    d6bec1af-42e2-498c-8176-8915b52b45fe
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.20
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-092150                       100m (1%!)(MISSING)     0 (0%!)(MISSING)      100Mi (1%!)(MISSING)       0 (0%!)(MISSING)         23s
	  kube-system                 kube-apiserver-kubernetes-upgrade-092150             250m (4%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         24s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-092150    200m (3%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         26s
	  kube-system                 kube-scheduler-kubernetes-upgrade-092150             100m (1%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%!)(MISSING)  0 (0%!)(MISSING)
	  memory             100Mi (1%!)(MISSING)  0 (0%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  31s (x5 over 31s)  kubelet  Node kubernetes-upgrade-092150 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s (x5 over 31s)  kubelet  Node kubernetes-upgrade-092150 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s (x5 over 31s)  kubelet  Node kubernetes-upgrade-092150 status is now: NodeHasSufficientPID
	  Normal  Starting                 24s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  24s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  24s                kubelet  Node kubernetes-upgrade-092150 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s                kubelet  Node kubernetes-upgrade-092150 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s                kubelet  Node kubernetes-upgrade-092150 status is now: NodeHasSufficientPID
	  Normal  NodeReady                24s                kubelet  Node kubernetes-upgrade-092150 status is now: NodeReady
	  Normal  Starting                 8s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-092150 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-092150 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet  Node kubernetes-upgrade-092150 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.001714] FS-Cache: O-key=[8] '488dae0300000000'
	[  +0.001157] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.001539] FS-Cache: N-cookie d=00000000966442c4{9p.inode} n=00000000e1a1a6d0
	[  +0.001710] FS-Cache: N-key=[8] '488dae0300000000'
	[  +0.002207] FS-Cache: Duplicate cookie detected
	[  +0.001068] FS-Cache: O-cookie c=00000007 [p=00000005 fl=226 nc=0 na=1]
	[  +0.001559] FS-Cache: O-cookie d=00000000966442c4{9p.inode} n=000000005ed6919f
	[  +0.001709] FS-Cache: O-key=[8] '488dae0300000000'
	[  +0.001169] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.001534] FS-Cache: N-cookie d=00000000966442c4{9p.inode} n=00000000b2ce95cf
	[  +0.001708] FS-Cache: N-key=[8] '488dae0300000000'
	[  +3.616838] FS-Cache: Duplicate cookie detected
	[  +0.001061] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.001566] FS-Cache: O-cookie d=00000000966442c4{9p.inode} n=0000000097b2e942
	[  +0.001705] FS-Cache: O-key=[8] '478dae0300000000'
	[  +0.001158] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.001518] FS-Cache: N-cookie d=00000000966442c4{9p.inode} n=00000000bf315956
	[  +0.001676] FS-Cache: N-key=[8] '478dae0300000000'
	[  +0.397688] FS-Cache: Duplicate cookie detected
	[  +0.001069] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.001560] FS-Cache: O-cookie d=00000000966442c4{9p.inode} n=0000000057a1aadc
	[  +0.001702] FS-Cache: O-key=[8] '648dae0300000000'
	[  +0.001156] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.001498] FS-Cache: N-cookie d=00000000966442c4{9p.inode} n=0000000083dcca6d
	[  +0.001683] FS-Cache: N-key=[8] '648dae0300000000'
	
	* 
	* ==> etcd [439753e8d5e4] <==
	* {"level":"info","ts":"2022-11-07T17:30:44.297Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-11-07T17:30:44.297Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-07T17:30:44.297Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-07T17:30:45.990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2022-11-07T17:30:45.990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-11-07T17:30:45.990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-11-07T17:30:45.990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2022-11-07T17:30:45.990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-11-07T17:30:45.990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2022-11-07T17:30:45.990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-11-07T17:30:45.991Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-092150 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-11-07T17:30:45.991Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-07T17:30:45.991Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-07T17:30:45.991Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-11-07T17:30:45.992Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-11-07T17:30:45.992Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-11-07T17:30:45.993Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-11-07T17:30:49.202Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-11-07T17:30:49.202Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"kubernetes-upgrade-092150","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/11/07 17:30:49 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/11/07 17:30:49 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-11-07T17:30:49.215Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-11-07T17:30:49.216Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-07T17:30:49.218Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-07T17:30:49.218Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"kubernetes-upgrade-092150","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> etcd [860f86d6b9ad] <==
	* {"level":"info","ts":"2022-11-07T17:30:51.801Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-11-07T17:30:51.801Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-11-07T17:30:51.802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-11-07T17:30:51.802Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-11-07T17:30:51.802Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-11-07T17:30:51.802Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-11-07T17:30:51.804Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-11-07T17:30:51.804Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-07T17:30:51.804Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-07T17:30:51.804Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-11-07T17:30:51.804Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-11-07T17:30:53.474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2022-11-07T17:30:53.474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-11-07T17:30:53.474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-11-07T17:30:53.474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2022-11-07T17:30:53.474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-11-07T17:30:53.475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2022-11-07T17:30:53.475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-11-07T17:30:53.477Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-092150 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-11-07T17:30:53.477Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-07T17:30:53.477Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-07T17:30:53.478Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-11-07T17:30:53.478Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-11-07T17:30:53.478Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-11-07T17:30:53.479Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  17:31:00 up  1:00,  0 users,  load average: 1.04, 0.93, 0.78
	Linux kubernetes-upgrade-092150 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [17848f2267b1] <==
	* I1107 17:30:49.206303       1 storage_flowcontrol.go:172] APF bootstrap ensurer is exiting
	W1107 17:30:49.207067       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	I1107 17:30:49.207686       1 controller.go:157] Shutting down quota evaluator
	I1107 17:30:49.207700       1 controller.go:157] Shutting down quota evaluator
	I1107 17:30:49.207722       1 controller.go:157] Shutting down quota evaluator
	I1107 17:30:49.207785       1 controller.go:176] quota evaluator worker shutdown
	I1107 17:30:49.207792       1 controller.go:176] quota evaluator worker shutdown
	I1107 17:30:49.207798       1 controller.go:176] quota evaluator worker shutdown
	I1107 17:30:49.207803       1 controller.go:176] quota evaluator worker shutdown
	I1107 17:30:49.207808       1 controller.go:176] quota evaluator worker shutdown
	W1107 17:30:49.208500       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [85a5d7f62c6d] <==
	* I1107 17:30:55.007252       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1107 17:30:55.007417       1 available_controller.go:491] Starting AvailableConditionController
	I1107 17:30:55.007442       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I1107 17:30:55.007457       1 autoregister_controller.go:141] Starting autoregister controller
	I1107 17:30:55.007460       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1107 17:30:55.007625       1 apf_controller.go:300] Starting API Priority and Fairness config controller
	I1107 17:30:55.007748       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1107 17:30:55.007774       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I1107 17:30:55.046457       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E1107 17:30:55.054205       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1107 17:30:55.082062       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1107 17:30:55.097896       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1107 17:30:55.108527       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1107 17:30:55.108887       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1107 17:30:55.108905       1 cache.go:39] Caches are synced for autoregister controller
	I1107 17:30:55.108889       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I1107 17:30:55.109251       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1107 17:30:55.142805       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I1107 17:30:55.820142       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1107 17:30:56.000721       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1107 17:30:56.596619       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I1107 17:30:56.603140       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I1107 17:30:56.621019       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I1107 17:30:56.633567       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1107 17:30:56.637262       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [9404b4cff778] <==
	* I1107 17:30:58.684286       1 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1107 17:30:58.684325       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1107 17:30:58.833695       1 controllermanager.go:603] Started "attachdetach"
	I1107 17:30:58.833775       1 attach_detach_controller.go:328] Starting attach detach controller
	I1107 17:30:58.833783       1 shared_informer.go:255] Waiting for caches to sync for attach detach
	I1107 17:30:58.883822       1 controllermanager.go:603] Started "endpointslicemirroring"
	I1107 17:30:58.883881       1 endpointslicemirroring_controller.go:216] Starting EndpointSliceMirroring controller
	I1107 17:30:58.883889       1 shared_informer.go:255] Waiting for caches to sync for endpoint_slice_mirroring
	I1107 17:30:59.033522       1 controllermanager.go:603] Started "cronjob"
	I1107 17:30:59.033626       1 cronjob_controllerv2.go:135] "Starting cronjob controller v2"
	I1107 17:30:59.033653       1 shared_informer.go:255] Waiting for caches to sync for cronjob
	I1107 17:30:59.184199       1 controllermanager.go:603] Started "clusterrole-aggregation"
	I1107 17:30:59.184262       1 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
	I1107 17:30:59.184268       1 shared_informer.go:255] Waiting for caches to sync for ClusterRoleAggregator
	I1107 17:30:59.386189       1 controllermanager.go:603] Started "ttl-after-finished"
	I1107 17:30:59.386312       1 ttlafterfinished_controller.go:109] Starting TTL after finished controller
	I1107 17:30:59.386324       1 shared_informer.go:255] Waiting for caches to sync for TTL after finished
	I1107 17:30:59.422264       1 controllermanager.go:603] Started "statefulset"
	I1107 17:30:59.422382       1 stateful_set.go:152] Starting stateful set controller
	I1107 17:30:59.422388       1 shared_informer.go:255] Waiting for caches to sync for stateful set
	I1107 17:30:59.599020       1 controllermanager.go:603] Started "tokencleaner"
	I1107 17:30:59.599186       1 tokencleaner.go:118] Starting token cleaner controller
	I1107 17:30:59.599195       1 shared_informer.go:255] Waiting for caches to sync for token_cleaner
	I1107 17:30:59.599207       1 shared_informer.go:262] Caches are synced for token_cleaner
	I1107 17:30:59.601988       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-controller-manager [b63ab1eedbe4] <==
	* I1107 17:30:42.615910       1 serving.go:348] Generated self-signed cert in-memory
	I1107 17:30:42.869804       1 controllermanager.go:178] Version: v1.25.3
	I1107 17:30:42.869837       1 controllermanager.go:180] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 17:30:42.870805       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1107 17:30:42.870899       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I1107 17:30:42.871289       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1107 17:30:42.871376       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-scheduler [3ec8568eb40b] <==
	* E1107 17:30:42.987616       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1107 17:30:42.987654       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:30:42.987687       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://192.168.67.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1107 17:30:42.987706       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.67.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:30:42.988033       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.67.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1107 17:30:42.988079       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.67.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:30:42.988065       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://192.168.67.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1107 17:30:42.988217       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.67.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:30:42.988308       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.67.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:30:42.988366       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://192.168.67.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1107 17:30:42.988383       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.67.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1107 17:30:42.988386       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.67.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:30:42.988415       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1107 17:30:42.988469       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:30:42.988496       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1107 17:30:42.988561       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:30:42.988581       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://192.168.67.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1107 17:30:42.988625       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.67.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1107 17:30:42.988718       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://192.168.67.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1107 17:30:42.988762       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.67.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	I1107 17:30:43.035789       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1107 17:30:43.035863       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1107 17:30:43.035958       1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 17:30:43.035988       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1107 17:30:43.036507       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [a519438ba19b] <==
	* W1107 17:30:55.064090       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1107 17:30:55.064147       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1107 17:30:55.064105       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1107 17:30:55.064277       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W1107 17:30:55.064453       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E1107 17:30:55.064483       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W1107 17:30:55.064941       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1107 17:30:55.064982       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1107 17:30:55.065099       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1107 17:30:55.065186       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1107 17:30:55.065239       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1107 17:30:55.065272       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W1107 17:30:55.065466       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1107 17:30:55.065525       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1107 17:30:55.065612       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1107 17:30:55.065642       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1107 17:30:55.065732       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E1107 17:30:55.065746       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W1107 17:30:55.066048       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1107 17:30:55.066089       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1107 17:30:55.066535       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1107 17:30:55.066594       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W1107 17:30:55.066720       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E1107 17:30:55.066798       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	I1107 17:30:55.957242       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-11-07 17:26:05 UTC, end at Mon 2022-11-07 17:31:01 UTC. --
	Nov 07 17:30:52 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:52.947443   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:53 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:53.048074   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:53 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:53.148893   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:53 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:53.249197   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:53 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:53.350008   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:53 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:53.450207   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:53 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:53.551262   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:53 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:53.651453   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:53 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:53.752362   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:53 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:53.853099   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:53 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:53.954122   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:54 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:54.054524   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:54 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:54.155420   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:54 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:54.255819   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:54 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:54.357036   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:54 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:54.457614   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:54 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:54.558598   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:54 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:54.659665   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:54 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:54.760521   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:54 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:54.861677   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:54 kubernetes-upgrade-092150 kubelet[13382]: E1107 17:30:54.961968   13382 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-092150\" not found"
	Nov 07 17:30:55 kubernetes-upgrade-092150 kubelet[13382]: I1107 17:30:55.149237   13382 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-092150"
	Nov 07 17:30:55 kubernetes-upgrade-092150 kubelet[13382]: I1107 17:30:55.149342   13382 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-092150"
	Nov 07 17:30:55 kubernetes-upgrade-092150 kubelet[13382]: I1107 17:30:55.974429   13382 apiserver.go:52] "Watching apiserver"
	Nov 07 17:30:56 kubernetes-upgrade-092150 kubelet[13382]: I1107 17:30:56.067980   13382 reconciler.go:169] "Reconciler: start to sync state"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-092150 -n kubernetes-upgrade-092150
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-092150 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: storage-provisioner
helpers_test.go:272: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context kubernetes-upgrade-092150 describe pod storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-092150 describe pod storage-provisioner: exit status 1 (50.900557ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context kubernetes-upgrade-092150 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-092150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-092150
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-092150: (2.785888786s)
--- FAIL: TestKubernetesUpgrade (553.63s)

                                                
                                    
TestMissingContainerUpgrade (64.96s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.1824705831.exe start -p missing-upgrade-092105 --memory=2200 --driver=docker 

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.1824705831.exe start -p missing-upgrade-092105 --memory=2200 --driver=docker : exit status 78 (48.869086134s)

                                                
                                                
-- stdout --
	! [missing-upgrade-092105] minikube v1.9.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-092105
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-092105" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.28.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.28.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 16.61 MiB ... 542.91 MiB  [carriage-return download progress condensed]
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 17:21:41.947646678 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-092105" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 17:21:53.091698983 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
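The provisioning failure above is the conflict called out in the rewritten /lib/systemd/system/docker.service: systemd refuses a unit with more than one ExecStart= unless the first directive clears the inherited one. A minimal sketch (not part of this test run; the drop-in path and dockerd flags are assumptions, not taken from this log) of the drop-in technique the diff's comments describe:

	# Hypothetical sketch: clear the inherited ExecStart in a drop-in before redefining it,
	# so systemd does not see "more than one ExecStart= setting".
	sudo mkdir -p /etc/systemd/system/docker.service.d
	printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
	  | sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null
	sudo systemctl daemon-reload && sudo systemctl restart docker
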
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.1824705831.exe start -p missing-upgrade-092105 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.1824705831.exe start -p missing-upgrade-092105 --memory=2200 --driver=docker : exit status 70 (4.488420077s)

                                                
                                                
-- stdout --
	* [missing-upgrade-092105] minikube v1.9.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-092105
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-092105" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.1824705831.exe start -p missing-upgrade-092105 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.1824705831.exe start -p missing-upgrade-092105 --memory=2200 --driver=docker : exit status 70 (4.089939113s)

                                                
                                                
-- stdout --
	* [missing-upgrade-092105] minikube v1.9.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-092105
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-092105" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2022-11-07 09:22:07.398902 -0800 PST m=+2234.734647113
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-092105
helpers_test.go:235: (dbg) docker inspect missing-upgrade-092105:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f2f061372640400ce913cd4c54f2da2a2ab996cf8d16841fdf007e0f9cd87083",
	        "Created": "2022-11-07T17:21:50.143479641Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 142085,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T17:21:50.597996673Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/f2f061372640400ce913cd4c54f2da2a2ab996cf8d16841fdf007e0f9cd87083/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2f061372640400ce913cd4c54f2da2a2ab996cf8d16841fdf007e0f9cd87083/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2f061372640400ce913cd4c54f2da2a2ab996cf8d16841fdf007e0f9cd87083/hosts",
	        "LogPath": "/var/lib/docker/containers/f2f061372640400ce913cd4c54f2da2a2ab996cf8d16841fdf007e0f9cd87083/f2f061372640400ce913cd4c54f2da2a2ab996cf8d16841fdf007e0f9cd87083-json.log",
	        "Name": "/missing-upgrade-092105",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-092105:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b49a14cf17b6df6557319b7f570bf13c9322a41e6fce98325d5e78e79711711a-init/diff:/var/lib/docker/overlay2/ce5e11bfc992a1a8f6d985dac2da213fb12466da5bf4a846750e53e75ca3183d/diff:/var/lib/docker/overlay2/6d130a4743287998046b9d2870a4c3b1b72a54dc2be0dd13c60e6a09c7c6d1ab/diff:/var/lib/docker/overlay2/9e0fb86728306987a63c6c365925b606090799d462b09bb60358ac58d788d637/diff:/var/lib/docker/overlay2/91fc9d4187b857d7439368b0f8da43e64916b0898d95e6370c986c39d4476a24/diff:/var/lib/docker/overlay2/f68e41bc8e02432bc31112bab2d913106a4cccbcb0718ec701d12c47ce1a5e6b/diff:/var/lib/docker/overlay2/da0503bae01e0c2f1dcca1507753003e7fa4fab8b811ef4bd9f44b8df7810261/diff:/var/lib/docker/overlay2/eb8aa4c3ba69f570e899a8be5b6fe74f801b0f1881147bbd2eeb7c2ab24fe8a9/diff:/var/lib/docker/overlay2/4380dccd8f6d70b50817097ebe26188055e03da8bd2dbcd94d554a094a07eb8d/diff:/var/lib/docker/overlay2/18799f5eb3f65b38408a5d017386b46eb6cd3dc34d06e68c238a58d3613cb440/diff:/var/lib/docker/overlay2/290277
7b4b29e78962a2c1fba3462009d545930037f1e2e2b297f10095d650ba/diff:/var/lib/docker/overlay2/98a930db05510e1ab529d982ad487d7229a5391f411ee1bf9ca25bfddd6dd810/diff:/var/lib/docker/overlay2/909c11a5c7167fcb51e2180dac2c7233993b270265c034a1975211db9a03a8ab/diff:/var/lib/docker/overlay2/c16a51f54f38775409e86242c130cb2225e6b00bd48b894bb5100e32f56d00ca/diff:/var/lib/docker/overlay2/e8ffa0670460c44067365553ba50cb4acac0d10761dcb01e1cf31148251c540b/diff:/var/lib/docker/overlay2/ba4d5b5c688339adeb153b804f2999c19e5444d8da632f9ff13141b6fd5b1029/diff:/var/lib/docker/overlay2/126c7fcb83dbcfabdbebe29b6aa730b7ae685710d20a010ed79626a2db327da8/diff:/var/lib/docker/overlay2/90df5f99607695ad7b916b512f8bec4f605f0fadc8555f06dbfb4ee5bc0e5d52/diff:/var/lib/docker/overlay2/888a0498617efd125b3a4ff5a0ff12fe733ef971c2844d31903155060f6d99ae/diff:/var/lib/docker/overlay2/ca744951d704db9cc6afe8b68ef931777d66fee2e5f1977f89279dde940a8dc0/diff:/var/lib/docker/overlay2/7050445f9614af96ed42f0d0177701afc718172458c6a1b656c7cf4d7c03e026/diff:/var/lib/d
ocker/overlay2/f7d3f88486de80aa8b10c310182f2a33532c5e2722c2d933c8b36d68af65ab90/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b49a14cf17b6df6557319b7f570bf13c9322a41e6fce98325d5e78e79711711a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b49a14cf17b6df6557319b7f570bf13c9322a41e6fce98325d5e78e79711711a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b49a14cf17b6df6557319b7f570bf13c9322a41e6fce98325d5e78e79711711a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-092105",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-092105/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-092105",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-092105",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-092105",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b82d3f95c3f245ef140ddf59077f6bb614c13f9de3eb280a0fc9204c2aed835d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51983"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51984"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51985"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b82d3f95c3f2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "14b464851f350bb7a2a3ea95273f479be8c2a04940d44010ec76b52dcef5500e",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "dffd526401466b8f4ca8a2e3f88ae9d6ca71b5f97245b6f3f21da95307666e43",
	                    "EndpointID": "14b464851f350bb7a2a3ea95273f479be8c2a04940d44010ec76b52dcef5500e",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-092105 -n missing-upgrade-092105
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-092105 -n missing-upgrade-092105: exit status 6 (383.63572ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 09:22:07.829196   12033 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-092105" does not appear in /Users/jenkins/minikube-integration/15310-2115/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-092105" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-092105" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-092105
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-092105: (2.275086829s)
--- FAIL: TestMissingContainerUpgrade (64.96s)
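
The status error above points at the proximate cause of this failure: after the upgrade step, the kubeconfig at /Users/jenkins/minikube-integration/15310-2115/kubeconfig no longer contains a cluster entry named "missing-upgrade-092105", so the status command cannot extract an API server endpoint for it (status.go:415). As a rough illustration of what that lookup amounts to, here is a minimal, self-contained Go sketch assuming the standard kubeconfig layout; the helper name endpointFor and the struct below are illustrative assumptions, not minikube's actual status.go code.

// kubeconfig_lookup.go - minimal sketch of looking up a named cluster's server URL
// in a kubeconfig file (assumes gopkg.in/yaml.v3 is available in go.mod).
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// kubeconfig models only the part of the file this sketch needs.
type kubeconfig struct {
	Clusters []struct {
		Name    string `yaml:"name"`
		Cluster struct {
			Server string `yaml:"server"`
		} `yaml:"cluster"`
	} `yaml:"clusters"`
}

// endpointFor returns the API server URL recorded for clusterName, or an error
// mirroring the "does not appear in <path>" message seen in the log above.
func endpointFor(path, clusterName string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	var cfg kubeconfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return "", err
	}
	for _, c := range cfg.Clusters {
		if c.Name == clusterName {
			return c.Cluster.Server, nil
		}
	}
	return "", fmt.Errorf("%q does not appear in %s", clusterName, path)
}

func main() {
	server, err := endpointFor(os.Getenv("KUBECONFIG"), "missing-upgrade-092105")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(server)
}

Running `minikube update-context`, as the warning in the status output suggests, rewrites that kubeconfig entry so a lookup like this succeeds.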

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (46.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3173493047.exe start -p stopped-upgrade-092210 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3173493047.exe start -p stopped-upgrade-092210 --memory=2200 --vm-driver=docker : exit status 70 (35.157859709s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-092210] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2004868902
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 17:22:27.064986707 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-092210" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 17:22:43.609985560 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-092210", then "minikube start -p stopped-upgrade-092210 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 15.30 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 35.00 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 55.77 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 77.34 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 97.44 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 116.50 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 137.59 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 159.55 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 182.02 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 204.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 226.36 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 248.58 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 270.78 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 292.27 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 314.47 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 329.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 349.98 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 370.42 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 390.59 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 410.41 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 432.55 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 452.81 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 474.69 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 497.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 518.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.
lz4: 532.98 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 17:22:43.609985560 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3173493047.exe start -p stopped-upgrade-092210 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3173493047.exe start -p stopped-upgrade-092210 --memory=2200 --vm-driver=docker : exit status 70 (4.35615867s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-092210] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1032168667
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-092210" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3173493047.exe start -p stopped-upgrade-092210 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3173493047.exe start -p stopped-upgrade-092210 --memory=2200 --vm-driver=docker : exit status 70 (4.458181257s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-092210] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3922431449
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-092210" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (46.85s)
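
Both start attempts above fail at the same provisioning step: the legacy v1.9.0 binary writes /lib/systemd/system/docker.service.new (the "+++" side of the diffs), moves it into place, and restarts docker, after which docker.service refuses to start; the log records only systemd's generic "control process exited with error code". The generated unit uses the systemd idiom of an empty ExecStart= directive to clear any inherited command before assigning a new one, as the comments in the diff explain. Below is a minimal, self-contained Go sketch of rendering such a unit with text/template; the template text, the unitOptions fields, and the flag values are illustrative assumptions, not minikube's actual provisioner code.

// docker_unit_sketch.go - renders a simplified docker.service body showing the
// empty-then-real ExecStart pattern from the diffs above.
package main

import (
	"os"
	"text/template"
)

const unitTemplate = `[Service]
Type=notify
# Clear any ExecStart inherited from a base unit, then set the one we want.
# Without the empty directive, systemd reports:
#   Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.DockerPort}} -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}}
ExecReload=/bin/kill -s HUP $MAINPID
TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
`

// unitOptions holds the values substituted into the template (illustrative only).
type unitOptions struct {
	DockerPort                    int
	CACert, ServerCert, ServerKey string
}

func main() {
	tmpl := template.Must(template.New("docker.service").Parse(unitTemplate))
	opts := unitOptions{
		DockerPort: 2376,
		CACert:     "/etc/docker/ca.pem",
		ServerCert: "/etc/docker/server.pem",
		ServerKey:  "/etc/docker/server-key.pem",
	}
	// Write the rendered unit to stdout; a provisioner would instead copy it to
	// /lib/systemd/system/docker.service.new and then daemon-reload and restart
	// docker, as in the ssh command quoted in the log above.
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		os.Exit(1)
	}
}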

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (251.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-093929 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E1107 09:39:30.065424    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/cilium-092105/client.crt: no such file or directory
E1107 09:39:32.625661    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/cilium-092105/client.crt: no such file or directory
E1107 09:39:37.745940    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/cilium-092105/client.crt: no such file or directory
E1107 09:39:43.570863    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
E1107 09:39:47.986345    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/cilium-092105/client.crt: no such file or directory
E1107 09:40:08.467773    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/cilium-092105/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-093929 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m11.469736308s)

                                                
                                                
-- stdout --
	* [old-k8s-version-093929] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-093929 in cluster old-k8s-version-093929
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 09:39:29.294575   16797 out.go:296] Setting OutFile to fd 1 ...
	I1107 09:39:29.295284   16797 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:39:29.295295   16797 out.go:309] Setting ErrFile to fd 2...
	I1107 09:39:29.295303   16797 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:39:29.295565   16797 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	I1107 09:39:29.296676   16797 out.go:303] Setting JSON to false
	I1107 09:39:29.319106   16797 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":4144,"bootTime":1667838625,"procs":397,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1107 09:39:29.319217   16797 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 09:39:29.340985   16797 out.go:177] * [old-k8s-version-093929] minikube v1.28.0 on Darwin 13.0
	I1107 09:39:29.361758   16797 notify.go:220] Checking for updates...
	I1107 09:39:29.382717   16797 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 09:39:29.403611   16797 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:39:29.424805   16797 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 09:39:29.445721   16797 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 09:39:29.466656   16797 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	I1107 09:39:29.488190   16797 config.go:180] Loaded profile config "kubenet-092103": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:39:29.488261   16797 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 09:39:29.558862   16797 docker.go:137] docker version: linux-20.10.20
	I1107 09:39:29.559061   16797 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 09:39:29.728617   16797 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:59 SystemTime:2022-11-07 17:39:29.622855965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 09:39:29.750545   16797 out.go:177] * Using the docker driver based on user configuration
	I1107 09:39:29.771087   16797 start.go:282] selected driver: docker
	I1107 09:39:29.771102   16797 start.go:808] validating driver "docker" against <nil>
	I1107 09:39:29.771131   16797 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 09:39:29.773934   16797 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 09:39:29.939195   16797 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:59 SystemTime:2022-11-07 17:39:29.83789566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/loc
al/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 09:39:29.939327   16797 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 09:39:29.939473   16797 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 09:39:29.961125   16797 out.go:177] * Using Docker Desktop driver with root privileges
	I1107 09:39:29.981864   16797 cni.go:95] Creating CNI manager for ""
	I1107 09:39:29.981881   16797 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 09:39:29.981892   16797 start_flags.go:317] config:
	{Name:old-k8s-version-093929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-093929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:39:30.002660   16797 out.go:177] * Starting control plane node old-k8s-version-093929 in cluster old-k8s-version-093929
	I1107 09:39:30.044914   16797 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 09:39:30.065685   16797 out.go:177] * Pulling base image ...
	I1107 09:39:30.107892   16797 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 09:39:30.107921   16797 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 09:39:30.107957   16797 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1107 09:39:30.107969   16797 cache.go:57] Caching tarball of preloaded images
	I1107 09:39:30.108108   16797 preload.go:174] Found /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 09:39:30.108120   16797 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1107 09:39:30.108680   16797 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/config.json ...
	I1107 09:39:30.108751   16797 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/config.json: {Name:mk3182e4739a1c9eda1ab5059c366bc95c85f31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:39:30.186072   16797 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 09:39:30.186103   16797 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 09:39:30.186117   16797 cache.go:208] Successfully downloaded all kic artifacts
	I1107 09:39:30.186187   16797 start.go:364] acquiring machines lock for old-k8s-version-093929: {Name:mk1219dfd9d2598aff29791b7c2ffd86213e8a98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 09:39:30.186395   16797 start.go:368] acquired machines lock for "old-k8s-version-093929" in 192.365µs
	I1107 09:39:30.186440   16797 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-093929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-093929 Namespace:default APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 09:39:30.186524   16797 start.go:125] createHost starting for "" (driver="docker")
	I1107 09:39:30.232935   16797 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1107 09:39:30.233256   16797 start.go:159] libmachine.API.Create for "old-k8s-version-093929" (driver="docker")
	I1107 09:39:30.233297   16797 client.go:168] LocalClient.Create starting
	I1107 09:39:30.233463   16797 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem
	I1107 09:39:30.233535   16797 main.go:134] libmachine: Decoding PEM data...
	I1107 09:39:30.233555   16797 main.go:134] libmachine: Parsing certificate...
	I1107 09:39:30.233627   16797 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem
	I1107 09:39:30.233676   16797 main.go:134] libmachine: Decoding PEM data...
	I1107 09:39:30.233688   16797 main.go:134] libmachine: Parsing certificate...
	I1107 09:39:30.254437   16797 cli_runner.go:164] Run: docker network inspect old-k8s-version-093929 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 09:39:30.316615   16797 cli_runner.go:211] docker network inspect old-k8s-version-093929 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 09:39:30.316789   16797 network_create.go:272] running [docker network inspect old-k8s-version-093929] to gather additional debugging logs...
	I1107 09:39:30.316810   16797 cli_runner.go:164] Run: docker network inspect old-k8s-version-093929
	W1107 09:39:30.383555   16797 cli_runner.go:211] docker network inspect old-k8s-version-093929 returned with exit code 1
	I1107 09:39:30.383586   16797 network_create.go:275] error running [docker network inspect old-k8s-version-093929]: docker network inspect old-k8s-version-093929: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-093929
	I1107 09:39:30.383600   16797 network_create.go:277] output of [docker network inspect old-k8s-version-093929]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-093929
	
	** /stderr **
	I1107 09:39:30.383728   16797 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 09:39:30.447035   16797 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00017c420] misses:0}
	I1107 09:39:30.447085   16797 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 09:39:30.447107   16797 network_create.go:115] attempt to create docker network old-k8s-version-093929 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1107 09:39:30.447218   16797 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-093929 old-k8s-version-093929
	W1107 09:39:30.508705   16797 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-093929 old-k8s-version-093929 returned with exit code 1
	W1107 09:39:30.508751   16797 network_create.go:107] failed to create docker network old-k8s-version-093929 192.168.49.0/24, will retry: subnet is taken
	I1107 09:39:30.509099   16797 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00017c420] amended:false}} dirty:map[] misses:0}
	I1107 09:39:30.509117   16797 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 09:39:30.509367   16797 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00017c420] amended:true}} dirty:map[192.168.49.0:0xc00017c420 192.168.58.0:0xc00061e1f8] misses:0}
	I1107 09:39:30.509382   16797 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 09:39:30.509395   16797 network_create.go:115] attempt to create docker network old-k8s-version-093929 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1107 09:39:30.509490   16797 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-093929 old-k8s-version-093929
	W1107 09:39:30.573114   16797 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-093929 old-k8s-version-093929 returned with exit code 1
	W1107 09:39:30.573148   16797 network_create.go:107] failed to create docker network old-k8s-version-093929 192.168.58.0/24, will retry: subnet is taken
	I1107 09:39:30.573453   16797 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00017c420] amended:true}} dirty:map[192.168.49.0:0xc00017c420 192.168.58.0:0xc00061e1f8] misses:1}
	I1107 09:39:30.573475   16797 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 09:39:30.573689   16797 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00017c420] amended:true}} dirty:map[192.168.49.0:0xc00017c420 192.168.58.0:0xc00061e1f8 192.168.67.0:0xc00061e230] misses:1}
	I1107 09:39:30.573705   16797 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 09:39:30.573712   16797 network_create.go:115] attempt to create docker network old-k8s-version-093929 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1107 09:39:30.573805   16797 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-093929 old-k8s-version-093929
	W1107 09:39:30.637052   16797 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-093929 old-k8s-version-093929 returned with exit code 1
	W1107 09:39:30.637090   16797 network_create.go:107] failed to create docker network old-k8s-version-093929 192.168.67.0/24, will retry: subnet is taken
	I1107 09:39:30.637373   16797 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00017c420] amended:true}} dirty:map[192.168.49.0:0xc00017c420 192.168.58.0:0xc00061e1f8 192.168.67.0:0xc00061e230] misses:2}
	I1107 09:39:30.637392   16797 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 09:39:30.637641   16797 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00017c420] amended:true}} dirty:map[192.168.49.0:0xc00017c420 192.168.58.0:0xc00061e1f8 192.168.67.0:0xc00061e230 192.168.76.0:0xc00019ce20] misses:2}
	I1107 09:39:30.637659   16797 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 09:39:30.637669   16797 network_create.go:115] attempt to create docker network old-k8s-version-093929 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1107 09:39:30.637771   16797 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-093929 old-k8s-version-093929
	I1107 09:39:31.004328   16797 network_create.go:99] docker network old-k8s-version-093929 192.168.76.0/24 created
	I1107 09:39:31.004386   16797 kic.go:106] calculated static IP "192.168.76.2" for the "old-k8s-version-093929" container
	I1107 09:39:31.004539   16797 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 09:39:31.069652   16797 cli_runner.go:164] Run: docker volume create old-k8s-version-093929 --label name.minikube.sigs.k8s.io=old-k8s-version-093929 --label created_by.minikube.sigs.k8s.io=true
	I1107 09:39:31.131936   16797 oci.go:103] Successfully created a docker volume old-k8s-version-093929
	I1107 09:39:31.132096   16797 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-093929-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-093929 --entrypoint /usr/bin/test -v old-k8s-version-093929:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1107 09:39:32.147298   16797 cli_runner.go:217] Completed: docker run --rm --name old-k8s-version-093929-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-093929 --entrypoint /usr/bin/test -v old-k8s-version-093929:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib: (1.015096355s)
	I1107 09:39:32.147323   16797 oci.go:107] Successfully prepared a docker volume old-k8s-version-093929
	I1107 09:39:32.147341   16797 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 09:39:32.147361   16797 kic.go:179] Starting extracting preloaded images to volume ...
	I1107 09:39:32.147470   16797 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-093929:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 09:39:36.591241   16797 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-093929:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (4.44357068s)
	I1107 09:39:36.591264   16797 kic.go:188] duration metric: took 4.443770 seconds to extract preloaded images to volume
	I1107 09:39:36.591399   16797 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 09:39:36.741340   16797 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-093929 --name old-k8s-version-093929 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-093929 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-093929 --network old-k8s-version-093929 --ip 192.168.76.2 --volume old-k8s-version-093929:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1107 09:39:37.142513   16797 cli_runner.go:164] Run: docker container inspect old-k8s-version-093929 --format={{.State.Running}}
	I1107 09:39:37.209371   16797 cli_runner.go:164] Run: docker container inspect old-k8s-version-093929 --format={{.State.Status}}
	I1107 09:39:37.279011   16797 cli_runner.go:164] Run: docker exec old-k8s-version-093929 stat /var/lib/dpkg/alternatives/iptables
	I1107 09:39:37.403024   16797 oci.go:144] the created container "old-k8s-version-093929" has a running status.
	I1107 09:39:37.403061   16797 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/old-k8s-version-093929/id_rsa...
	I1107 09:39:37.811702   16797 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/old-k8s-version-093929/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 09:39:37.925518   16797 cli_runner.go:164] Run: docker container inspect old-k8s-version-093929 --format={{.State.Status}}
	I1107 09:39:37.986948   16797 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 09:39:37.986978   16797 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-093929 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 09:39:38.141719   16797 cli_runner.go:164] Run: docker container inspect old-k8s-version-093929 --format={{.State.Status}}
	I1107 09:39:38.204267   16797 machine.go:88] provisioning docker machine ...
	I1107 09:39:38.204313   16797 ubuntu.go:169] provisioning hostname "old-k8s-version-093929"
	I1107 09:39:38.204446   16797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:39:38.266774   16797 main.go:134] libmachine: Using SSH client type: native
	I1107 09:39:38.266976   16797 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53729 <nil> <nil>}
	I1107 09:39:38.266995   16797 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-093929 && echo "old-k8s-version-093929" | sudo tee /etc/hostname
	I1107 09:39:38.394281   16797 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-093929
	
	I1107 09:39:38.394419   16797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:39:38.457784   16797 main.go:134] libmachine: Using SSH client type: native
	I1107 09:39:38.457970   16797 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53729 <nil> <nil>}
	I1107 09:39:38.457984   16797 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-093929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-093929/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-093929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 09:39:38.578951   16797 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 09:39:38.578972   16797 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15310-2115/.minikube CaCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15310-2115/.minikube}
	I1107 09:39:38.578996   16797 ubuntu.go:177] setting up certificates
	I1107 09:39:38.579006   16797 provision.go:83] configureAuth start
	I1107 09:39:38.579104   16797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-093929
	I1107 09:39:38.641427   16797 provision.go:138] copyHostCerts
	I1107 09:39:38.641534   16797 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem, removing ...
	I1107 09:39:38.641544   16797 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 09:39:38.641679   16797 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem (1679 bytes)
	I1107 09:39:38.641922   16797 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem, removing ...
	I1107 09:39:38.641929   16797 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 09:39:38.642000   16797 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem (1082 bytes)
	I1107 09:39:38.642174   16797 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem, removing ...
	I1107 09:39:38.642181   16797 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 09:39:38.642256   16797 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem (1123 bytes)
	I1107 09:39:38.642417   16797 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-093929 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-093929]
	I1107 09:39:39.024319   16797 provision.go:172] copyRemoteCerts
	I1107 09:39:39.024402   16797 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 09:39:39.024467   16797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:39:39.085036   16797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53729 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/old-k8s-version-093929/id_rsa Username:docker}
	I1107 09:39:39.170654   16797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 09:39:39.189694   16797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1107 09:39:39.211372   16797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 09:39:39.230313   16797 provision.go:86] duration metric: configureAuth took 651.275664ms
	I1107 09:39:39.230327   16797 ubuntu.go:193] setting minikube options for container-runtime
	I1107 09:39:39.230478   16797 config.go:180] Loaded profile config "old-k8s-version-093929": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1107 09:39:39.230552   16797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:39:39.292990   16797 main.go:134] libmachine: Using SSH client type: native
	I1107 09:39:39.293157   16797 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53729 <nil> <nil>}
	I1107 09:39:39.293174   16797 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 09:39:39.410412   16797 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 09:39:39.410433   16797 ubuntu.go:71] root file system type: overlay
	I1107 09:39:39.410600   16797 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 09:39:39.410710   16797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:39:39.472574   16797 main.go:134] libmachine: Using SSH client type: native
	I1107 09:39:39.472737   16797 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53729 <nil> <nil>}
	I1107 09:39:39.472791   16797 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 09:39:39.601064   16797 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 09:39:39.601193   16797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:39:39.667512   16797 main.go:134] libmachine: Using SSH client type: native
	I1107 09:39:39.667703   16797 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53729 <nil> <nil>}
	I1107 09:39:39.667716   16797 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 09:39:40.352477   16797 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 17:39:39.610077794 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1107 09:39:40.352498   16797 machine.go:91] provisioned docker machine in 2.148145407s
	I1107 09:39:40.352505   16797 client.go:171] LocalClient.Create took 10.118899449s
	I1107 09:39:40.352545   16797 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-093929" took 10.118964237s
	I1107 09:39:40.352555   16797 start.go:300] post-start starting for "old-k8s-version-093929" (driver="docker")
	I1107 09:39:40.352560   16797 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 09:39:40.352637   16797 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 09:39:40.352707   16797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:39:40.412551   16797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53729 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/old-k8s-version-093929/id_rsa Username:docker}
	I1107 09:39:40.498757   16797 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 09:39:40.502687   16797 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 09:39:40.502703   16797 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 09:39:40.502711   16797 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 09:39:40.502718   16797 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 09:39:40.502728   16797 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/addons for local assets ...
	I1107 09:39:40.502828   16797 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/files for local assets ...
	I1107 09:39:40.503011   16797 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> 32672.pem in /etc/ssl/certs
	I1107 09:39:40.503234   16797 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 09:39:40.511671   16797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:39:40.531929   16797 start.go:303] post-start completed in 179.359398ms
	I1107 09:39:40.532576   16797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-093929
	I1107 09:39:40.593073   16797 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/config.json ...
	I1107 09:39:40.593530   16797 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 09:39:40.593601   16797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:39:40.653528   16797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53729 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/old-k8s-version-093929/id_rsa Username:docker}
	I1107 09:39:40.735516   16797 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 09:39:40.740642   16797 start.go:128] duration metric: createHost completed in 10.553794417s
	I1107 09:39:40.740663   16797 start.go:83] releasing machines lock for "old-k8s-version-093929", held for 10.553938553s
	I1107 09:39:40.740781   16797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-093929
	I1107 09:39:40.803569   16797 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1107 09:39:40.803576   16797 ssh_runner.go:195] Run: systemctl --version
	I1107 09:39:40.803656   16797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:39:40.803682   16797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:39:40.870353   16797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53729 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/old-k8s-version-093929/id_rsa Username:docker}
	I1107 09:39:40.893026   16797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53729 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/old-k8s-version-093929/id_rsa Username:docker}
	I1107 09:39:41.195442   16797 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 09:39:41.206200   16797 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 09:39:41.206267   16797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 09:39:41.215858   16797 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 09:39:41.229445   16797 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 09:39:41.301706   16797 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 09:39:41.376325   16797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:39:41.443959   16797 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 09:39:41.661549   16797 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:39:41.697610   16797 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:39:41.774056   16797 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	I1107 09:39:41.774160   16797 cli_runner.go:164] Run: docker exec -t old-k8s-version-093929 dig +short host.docker.internal
	I1107 09:39:41.898022   16797 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1107 09:39:41.898146   16797 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1107 09:39:41.902375   16797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 09:39:41.912300   16797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:39:41.972044   16797 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 09:39:41.972135   16797 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 09:39:41.998285   16797 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1107 09:39:41.998304   16797 docker.go:543] Images already preloaded, skipping extraction
	I1107 09:39:41.998408   16797 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 09:39:42.024191   16797 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1107 09:39:42.024208   16797 cache_images.go:84] Images are preloaded, skipping loading
	I1107 09:39:42.024330   16797 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 09:39:42.104706   16797 cni.go:95] Creating CNI manager for ""
	I1107 09:39:42.104725   16797 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 09:39:42.104735   16797 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 09:39:42.104753   16797 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-093929 NodeName:old-k8s-version-093929 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 09:39:42.104872   16797 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-093929"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-093929
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 09:39:42.104955   16797 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-093929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-093929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 09:39:42.105034   16797 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1107 09:39:42.114709   16797 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 09:39:42.114799   16797 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 09:39:42.122804   16797 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I1107 09:39:42.139095   16797 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 09:39:42.153565   16797 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1107 09:39:42.168078   16797 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1107 09:39:42.172723   16797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 09:39:42.183211   16797 certs.go:54] Setting up /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929 for IP: 192.168.76.2
	I1107 09:39:42.183332   16797 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key
	I1107 09:39:42.183387   16797 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key
	I1107 09:39:42.183441   16797 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/client.key
	I1107 09:39:42.183466   16797 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/client.crt with IP's: []
	I1107 09:39:42.418044   16797 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/client.crt ...
	I1107 09:39:42.418058   16797 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/client.crt: {Name:mk8ef399bad7ce9aaf7970946c30aa5f7ca1ad37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:39:42.418398   16797 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/client.key ...
	I1107 09:39:42.418407   16797 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/client.key: {Name:mk8bbcf412935328080da50c8fe235bd3b5a5c63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:39:42.418639   16797 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/apiserver.key.31bdca25
	I1107 09:39:42.418659   16797 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 09:39:42.588578   16797 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/apiserver.crt.31bdca25 ...
	I1107 09:39:42.588597   16797 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/apiserver.crt.31bdca25: {Name:mk2d15f7969e8ffb5ebdee725954a904bdb9c391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:39:42.588907   16797 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/apiserver.key.31bdca25 ...
	I1107 09:39:42.588916   16797 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/apiserver.key.31bdca25: {Name:mk5f233418d28d98d468a24d297b2ef233cbcef3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:39:42.589132   16797 certs.go:320] copying /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/apiserver.crt
	I1107 09:39:42.589306   16797 certs.go:324] copying /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/apiserver.key
	I1107 09:39:42.589484   16797 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/proxy-client.key
	I1107 09:39:42.589503   16797 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/proxy-client.crt with IP's: []
	I1107 09:39:42.712266   16797 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/proxy-client.crt ...
	I1107 09:39:42.712281   16797 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/proxy-client.crt: {Name:mkf86b6770a9d78879917fdd9adac9dd0be3c3fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:39:42.712611   16797 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/proxy-client.key ...
	I1107 09:39:42.712619   16797 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/proxy-client.key: {Name:mk2d1b822a728b1c67539efc292c7a9018bbc640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:39:42.713030   16797 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem (1338 bytes)
	W1107 09:39:42.713081   16797 certs.go:384] ignoring /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267_empty.pem, impossibly tiny 0 bytes
	I1107 09:39:42.713096   16797 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 09:39:42.713136   16797 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem (1082 bytes)
	I1107 09:39:42.713172   16797 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem (1123 bytes)
	I1107 09:39:42.713205   16797 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem (1679 bytes)
	I1107 09:39:42.713294   16797 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:39:42.713795   16797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 09:39:42.733234   16797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1107 09:39:42.755635   16797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 09:39:42.776103   16797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1107 09:39:42.796447   16797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 09:39:42.814945   16797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 09:39:42.832584   16797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 09:39:42.850679   16797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 09:39:42.868469   16797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /usr/share/ca-certificates/32672.pem (1708 bytes)
	I1107 09:39:42.885714   16797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 09:39:42.904217   16797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem --> /usr/share/ca-certificates/3267.pem (1338 bytes)
	I1107 09:39:42.922116   16797 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 09:39:42.940565   16797 ssh_runner.go:195] Run: openssl version
	I1107 09:39:42.947959   16797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32672.pem && ln -fs /usr/share/ca-certificates/32672.pem /etc/ssl/certs/32672.pem"
	I1107 09:39:42.957983   16797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32672.pem
	I1107 09:39:42.962302   16797 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/32672.pem
	I1107 09:39:42.962376   16797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32672.pem
	I1107 09:39:42.968618   16797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32672.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 09:39:42.976852   16797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 09:39:42.986848   16797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:39:42.991807   16797 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:39:42.991861   16797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:39:42.997453   16797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 09:39:43.005506   16797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3267.pem && ln -fs /usr/share/ca-certificates/3267.pem /etc/ssl/certs/3267.pem"
	I1107 09:39:43.013465   16797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3267.pem
	I1107 09:39:43.017559   16797 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/3267.pem
	I1107 09:39:43.017631   16797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3267.pem
	I1107 09:39:43.023100   16797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3267.pem /etc/ssl/certs/51391683.0"
	I1107 09:39:43.031118   16797 kubeadm.go:396] StartCluster: {Name:old-k8s-version-093929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-093929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:39:43.031316   16797 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 09:39:43.055774   16797 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 09:39:43.063653   16797 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 09:39:43.070909   16797 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 09:39:43.070964   16797 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 09:39:43.078295   16797 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 09:39:43.078323   16797 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 09:39:43.125609   16797 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1107 09:39:43.125672   16797 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 09:39:43.442494   16797 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 09:39:43.442586   16797 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 09:39:43.442663   16797 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 09:39:43.675520   16797 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 09:39:43.677488   16797 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 09:39:43.684348   16797 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1107 09:39:43.749355   16797 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 09:39:43.774724   16797 out.go:204]   - Generating certificates and keys ...
	I1107 09:39:43.774824   16797 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 09:39:43.774913   16797 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 09:39:44.010627   16797 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 09:39:44.292525   16797 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1107 09:39:44.438015   16797 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1107 09:39:44.594144   16797 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1107 09:39:44.676422   16797 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1107 09:39:44.676555   16797 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-093929 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1107 09:39:44.824910   16797 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1107 09:39:44.825016   16797 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-093929 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1107 09:39:45.184530   16797 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 09:39:45.301547   16797 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 09:39:45.713921   16797 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1107 09:39:45.714097   16797 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 09:39:45.951441   16797 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 09:39:46.063115   16797 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 09:39:46.229421   16797 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 09:39:46.427356   16797 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 09:39:46.428408   16797 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 09:39:46.450130   16797 out.go:204]   - Booting up control plane ...
	I1107 09:39:46.450269   16797 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 09:39:46.450401   16797 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 09:39:46.450527   16797 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 09:39:46.450673   16797 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 09:39:46.450955   16797 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 09:40:26.409481   16797 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1107 09:40:26.410485   16797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:40:26.410763   16797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:40:31.407863   16797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:40:31.408086   16797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:40:41.402851   16797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:40:41.403135   16797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:41:01.390758   16797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:41:01.390970   16797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:41:41.363742   16797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:41:41.364088   16797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:41:41.364111   16797 kubeadm.go:317] 
	I1107 09:41:41.364153   16797 kubeadm.go:317] Unfortunately, an error has occurred:
	I1107 09:41:41.364194   16797 kubeadm.go:317] 	timed out waiting for the condition
	I1107 09:41:41.364206   16797 kubeadm.go:317] 
	I1107 09:41:41.364234   16797 kubeadm.go:317] This error is likely caused by:
	I1107 09:41:41.364257   16797 kubeadm.go:317] 	- The kubelet is not running
	I1107 09:41:41.364329   16797 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1107 09:41:41.364333   16797 kubeadm.go:317] 
	I1107 09:41:41.364426   16797 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1107 09:41:41.364462   16797 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1107 09:41:41.364501   16797 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1107 09:41:41.364509   16797 kubeadm.go:317] 
	I1107 09:41:41.364603   16797 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1107 09:41:41.364773   16797 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1107 09:41:41.364919   16797 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1107 09:41:41.364973   16797 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1107 09:41:41.365026   16797 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1107 09:41:41.365049   16797 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1107 09:41:41.372140   16797 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1107 09:41:41.372311   16797 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1107 09:41:41.372442   16797 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 09:41:41.372611   16797 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1107 09:41:41.372790   16797 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W1107 09:41:41.372939   16797 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-093929 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-093929 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-093929 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-093929 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1107 09:41:41.372971   16797 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1107 09:41:41.866188   16797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 09:41:41.876560   16797 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 09:41:41.876627   16797 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 09:41:41.884751   16797 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 09:41:41.884777   16797 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 09:41:41.935564   16797 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1107 09:41:41.935644   16797 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 09:41:42.253427   16797 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 09:41:42.253525   16797 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 09:41:42.253618   16797 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 09:41:42.516407   16797 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 09:41:42.516983   16797 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 09:41:42.524233   16797 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1107 09:41:42.611493   16797 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 09:41:42.645355   16797 out.go:204]   - Generating certificates and keys ...
	I1107 09:41:42.645471   16797 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 09:41:42.645564   16797 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 09:41:42.645649   16797 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1107 09:41:42.645705   16797 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1107 09:41:42.645798   16797 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1107 09:41:42.645850   16797 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1107 09:41:42.645911   16797 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1107 09:41:42.646023   16797 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1107 09:41:42.646152   16797 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1107 09:41:42.646233   16797 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1107 09:41:42.646272   16797 kubeadm.go:317] [certs] Using the existing "sa" key
	I1107 09:41:42.646327   16797 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 09:41:42.678270   16797 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 09:41:42.759741   16797 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 09:41:42.887811   16797 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 09:41:43.146851   16797 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 09:41:43.147440   16797 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 09:41:43.170075   16797 out.go:204]   - Booting up control plane ...
	I1107 09:41:43.170198   16797 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 09:41:43.170272   16797 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 09:41:43.170349   16797 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 09:41:43.170439   16797 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 09:41:43.170606   16797 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 09:42:23.131063   16797 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1107 09:42:23.132476   16797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:42:23.132684   16797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:42:28.131112   16797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:42:28.131317   16797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:42:38.125429   16797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:42:38.125711   16797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:42:58.113213   16797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:42:58.113442   16797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:43:38.086582   16797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:43:38.086735   16797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:43:38.086749   16797 kubeadm.go:317] 
	I1107 09:43:38.086798   16797 kubeadm.go:317] Unfortunately, an error has occurred:
	I1107 09:43:38.086829   16797 kubeadm.go:317] 	timed out waiting for the condition
	I1107 09:43:38.086835   16797 kubeadm.go:317] 
	I1107 09:43:38.086861   16797 kubeadm.go:317] This error is likely caused by:
	I1107 09:43:38.086911   16797 kubeadm.go:317] 	- The kubelet is not running
	I1107 09:43:38.087007   16797 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1107 09:43:38.087017   16797 kubeadm.go:317] 
	I1107 09:43:38.087108   16797 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1107 09:43:38.087141   16797 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1107 09:43:38.087164   16797 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1107 09:43:38.087175   16797 kubeadm.go:317] 
	I1107 09:43:38.087262   16797 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1107 09:43:38.087352   16797 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1107 09:43:38.087427   16797 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1107 09:43:38.087469   16797 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1107 09:43:38.087523   16797 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1107 09:43:38.087551   16797 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1107 09:43:38.090625   16797 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1107 09:43:38.090735   16797 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1107 09:43:38.090823   16797 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 09:43:38.090892   16797 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1107 09:43:38.090945   16797 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1107 09:43:38.090971   16797 kubeadm.go:398] StartCluster complete in 3m55.052809114s
	I1107 09:43:38.091072   16797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:43:38.113203   16797 logs.go:274] 0 containers: []
	W1107 09:43:38.113215   16797 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:43:38.113321   16797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:43:38.137595   16797 logs.go:274] 0 containers: []
	W1107 09:43:38.137608   16797 logs.go:276] No container was found matching "etcd"
	I1107 09:43:38.137691   16797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:43:38.160016   16797 logs.go:274] 0 containers: []
	W1107 09:43:38.160029   16797 logs.go:276] No container was found matching "coredns"
	I1107 09:43:38.160118   16797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:43:38.181846   16797 logs.go:274] 0 containers: []
	W1107 09:43:38.181857   16797 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:43:38.181938   16797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:43:38.202852   16797 logs.go:274] 0 containers: []
	W1107 09:43:38.202864   16797 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:43:38.202968   16797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:43:38.225448   16797 logs.go:274] 0 containers: []
	W1107 09:43:38.225459   16797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:43:38.225541   16797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:43:38.250549   16797 logs.go:274] 0 containers: []
	W1107 09:43:38.250561   16797 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:43:38.250641   16797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:43:38.274705   16797 logs.go:274] 0 containers: []
	W1107 09:43:38.274718   16797 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:43:38.274727   16797 logs.go:123] Gathering logs for Docker ...
	I1107 09:43:38.274737   16797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:43:38.291219   16797 logs.go:123] Gathering logs for container status ...
	I1107 09:43:38.291233   16797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:43:40.340238   16797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048932584s)
	I1107 09:43:40.340366   16797 logs.go:123] Gathering logs for kubelet ...
	I1107 09:43:40.340373   16797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:43:40.378369   16797 logs.go:123] Gathering logs for dmesg ...
	I1107 09:43:40.378383   16797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:43:40.390656   16797 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:43:40.390668   16797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:43:40.444860   16797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1107 09:43:40.444884   16797 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1107 09:43:40.444906   16797 out.go:239] * 
	* 
	W1107 09:43:40.445010   16797 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 09:43:40.445029   16797 out.go:239] * 
	* 
	W1107 09:43:40.445676   16797 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 09:43:40.542433   16797 out.go:177] 
	W1107 09:43:40.602831   16797 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 09:43:40.603010   16797 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1107 09:43:40.603132   16797 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1107 09:43:40.645300   16797 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-093929 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-093929
helpers_test.go:235: (dbg) docker inspect old-k8s-version-093929:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd",
	        "Created": "2022-11-07T17:39:36.809249754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 258627,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T17:39:37.143637295Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/hosts",
	        "LogPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd-json.log",
	        "Name": "/old-k8s-version-093929",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-093929:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-093929",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad-init/diff:/var/lib/docker/overlay2/8ef76795356079208b1acef7376be67a28d951b743a50dd56a60b0d456568ae9/diff:/var/lib/docker/overlay2/f9288d2baad2a30057af35c115d2ebfb4650d5d1d798a60a2334facced392980/diff:/var/lib/docker/overlay2/270f6ca71b47e51691c54d669e6e8e86c321939c053498289406eab5aa0462f5/diff:/var/lib/docker/overlay2/ebe3fe002872a87a7cc54a77192a2ea1f0efb3730f887abec35652e72f152f46/diff:/var/lib/docker/overlay2/83c9d5ae9817ab2b318ad7ba44ade4fe9c22378e15e338b8fe94c5998fbac5c4/diff:/var/lib/docker/overlay2/6426b1d4e4f369bec5066b3c17c47f9c451787be596ba417de62155901d14061/diff:/var/lib/docker/overlay2/f409955dc1056669a5ee00fa64ecfa9733f3de1a92beefeeca73cba51d930189/diff:/var/lib/docker/overlay2/3ecb7ca97b99ba70c03450a3d6d4a4452c7e9e348eec3cf89e6e8ee51aba6a8b/diff:/var/lib/docker/overlay2/9dd8fffded9665b1b7a326cb2bb3e29e3b716cdba6544940490326ddcbfe2bda/diff:/var/lib/docker/overlay2/b43aed
d977d94230f77efb53c193c1a02895ea314fcdece500155052dfeb6b29/diff:/var/lib/docker/overlay2/ba3bd8f651e3503bd8eadf3ce01b8930edaf7eb6af4044593c756be0f3c5d03a/diff:/var/lib/docker/overlay2/359c64a8e323929352da8612c231ccf0f6be76af37c8a208a9ee98c3bce5e2a1/diff:/var/lib/docker/overlay2/868ec2aea7bce1a74dcdf6c7a708b34838e8c08e795aad6e5b974d1ab15b719c/diff:/var/lib/docker/overlay2/0438a0192165f11b19940586b456c07bfa31d015147b9d008aafaacc09fbc40c/diff:/var/lib/docker/overlay2/80a13b6491a8f9f1c0f6848a375575c20f50d592cb34f21491050776a56fca61/diff:/var/lib/docker/overlay2/dd29a4d45bcf60d3684330374a82b3f3bde4245c5d49661ffdd516cd0c0af260/diff:/var/lib/docker/overlay2/ef8c6936e45d238f2880da0d94945cb610fba8a9e38cdfb3ae6674a82a8f0480/diff:/var/lib/docker/overlay2/9934f45b2cecf953b6f56ee634f63c3dd99c8c358b74fee64fdc62cef64f7723/diff:/var/lib/docker/overlay2/f5ccdcf1811b84ddfcc2efdc07e5feefa2803c1fe476b6653b0a6af55c2e684f/diff:/var/lib/docker/overlay2/2b3b062a0d083aedf009b6c8dde21debe0396b301936ec1950364a1d0ef86b6d/diff:/var/lib/d
ocker/overlay2/db91c57bd6754e3dbdc6c234df413d494606d408e284454bf7ab30cd23f9e840/diff:/var/lib/docker/overlay2/6538f86ce38383e3a133480b44c25afa8b31a61935d6f87270e2cc139e424425/diff:/var/lib/docker/overlay2/80972648e2aa65675fe7f3de22feae57951c0092d5f963f2430650b071940bba/diff:/var/lib/docker/overlay2/19dc0f28f2a85362d2b586f65ab00efa8a97868656af9dc5911259dd3ca649ac/diff:/var/lib/docker/overlay2/99eff050eadab512f36f80d63e8b57d9aa45ef607d723d7ac3f20ece8310a758/diff:/var/lib/docker/overlay2/d6309ab08fa5212992e2b5125645ad32bce2940b50c5e8a5b72e7c7531eb80b4/diff:/var/lib/docker/overlay2/c4d3d6d4212753e50a5f68577281382a30773fb33ca98730aebdfd86d48f612c/diff:/var/lib/docker/overlay2/4292068e16912b59305479ae020d9aa923d57157c4a28dd11e69102be9c1541a/diff:/var/lib/docker/overlay2/2274c567eadc1a99c8173258b3794df0df44fd1abac0aaae2100133ad15b3f30/diff:/var/lib/docker/overlay2/e3bb447cc7563c5af39c4076a93bb7b33bd1a7c6c5ccef7fea2a6a99deddf9f3/diff:/var/lib/docker/overlay2/4329b8a4d7648d8e3bb46a144b9939a5026fa69e5ac188a778cf6ede21a
9627e/diff:/var/lib/docker/overlay2/b600639ff99f881a9eb993fd36e2faf1c0f88a869675ab9d8ec116efc2642784/diff:/var/lib/docker/overlay2/da083fbec4f2fa2681bbaaaa559fdcc46ec2a520e7b9ced39197e805a661fda3/diff:/var/lib/docker/overlay2/63848d00284d16d750a7e746c8be62f8c15819bc2fcb72297788f3c9647257e6/diff:/var/lib/docker/overlay2/3fd667008c6a5c1c5828bb4e003fc21c477a31c4d59b5b675a3886d8a7cb782d/diff:/var/lib/docker/overlay2/6b125cd950aed912fcc597ce8a96bbb5af3dbba111d6eb683ea981387e02e99d/diff:/var/lib/docker/overlay2/b4c672faa14a55ba585c6063024785d7913afc546dd6d04975591d2e13d7b52f/diff:/var/lib/docker/overlay2/c2c0287a05145a26d3313d4e33799ea96103a20115734a66a3c2af8fe728b170/diff:/var/lib/docker/overlay2/dba7b9788bd657997c8cee3b3ef21f9bc4ade7b5a0da25526255047311da571d/diff:/var/lib/docker/overlay2/1f3ae87b3ce804fde9f857de6cb225d5afa00aa39260d197d77f67e840e2d285/diff:/var/lib/docker/overlay2/603b72832425bade21ef2d76583dbe61a46ff7fbe7277673cbc6cd52cf7613dd/diff:/var/lib/docker/overlay2/a47793b1e0564c094c05134af06d2d46a6bcb7
6089b3836b831863ef51c21684/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-093929",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-093929/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-093929",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-093929",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-093929",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df36e65f1c697018769e2cb073b4d1ab9327d2cbebd6626fc0b46fa243b9ed92",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53729"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53730"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53731"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53732"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53733"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/df36e65f1c69",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-093929": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "50811d3bbfa7",
	                        "old-k8s-version-093929"
	                    ],
	                    "NetworkID": "85b1c6253454469ed38e54ce96d4baef11c9b5b3afd90032e806121a14971f03",
	                    "EndpointID": "38533a1a1835e26950d13a17e03943cf6a91601ddc3f8e93c9594d62c6d8df36",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-093929 -n old-k8s-version-093929
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-093929 -n old-k8s-version-093929: exit status 6 (396.631422ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 09:43:41.183997   17561 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-093929" does not appear in /Users/jenkins/minikube-integration/15310-2115/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-093929" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (251.96s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (58.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.110965257s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1107 09:40:37.118404    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.10267744s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1107 09:40:38.292587    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.110519638s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1107 09:40:49.431442    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/cilium-092105/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.110965569s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.097389611s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1107 09:41:05.495713    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.112581505s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1107 09:41:12.397468    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:41:12.403086    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:41:12.413764    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:41:12.434891    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:41:12.475886    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:41:12.556955    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:41:12.717559    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:41:13.038302    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:41:13.680628    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:41:14.960949    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:41:17.523154    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1107 09:41:22.644402    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.112159298s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (58.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-093929 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-093929 create -f testdata/busybox.yaml: exit status 1 (35.162447ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-093929" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-093929 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-093929
helpers_test.go:235: (dbg) docker inspect old-k8s-version-093929:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd",
	        "Created": "2022-11-07T17:39:36.809249754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 258627,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T17:39:37.143637295Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/hosts",
	        "LogPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd-json.log",
	        "Name": "/old-k8s-version-093929",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-093929:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-093929",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad-init/diff:/var/lib/docker/overlay2/8ef76795356079208b1acef7376be67a28d951b743a50dd56a60b0d456568ae9/diff:/var/lib/docker/overlay2/f9288d2baad2a30057af35c115d2ebfb4650d5d1d798a60a2334facced392980/diff:/var/lib/docker/overlay2/270f6ca71b47e51691c54d669e6e8e86c321939c053498289406eab5aa0462f5/diff:/var/lib/docker/overlay2/ebe3fe002872a87a7cc54a77192a2ea1f0efb3730f887abec35652e72f152f46/diff:/var/lib/docker/overlay2/83c9d5ae9817ab2b318ad7ba44ade4fe9c22378e15e338b8fe94c5998fbac5c4/diff:/var/lib/docker/overlay2/6426b1d4e4f369bec5066b3c17c47f9c451787be596ba417de62155901d14061/diff:/var/lib/docker/overlay2/f409955dc1056669a5ee00fa64ecfa9733f3de1a92beefeeca73cba51d930189/diff:/var/lib/docker/overlay2/3ecb7ca97b99ba70c03450a3d6d4a4452c7e9e348eec3cf89e6e8ee51aba6a8b/diff:/var/lib/docker/overlay2/9dd8fffded9665b1b7a326cb2bb3e29e3b716cdba6544940490326ddcbfe2bda/diff:/var/lib/docker/overlay2/b43aed
d977d94230f77efb53c193c1a02895ea314fcdece500155052dfeb6b29/diff:/var/lib/docker/overlay2/ba3bd8f651e3503bd8eadf3ce01b8930edaf7eb6af4044593c756be0f3c5d03a/diff:/var/lib/docker/overlay2/359c64a8e323929352da8612c231ccf0f6be76af37c8a208a9ee98c3bce5e2a1/diff:/var/lib/docker/overlay2/868ec2aea7bce1a74dcdf6c7a708b34838e8c08e795aad6e5b974d1ab15b719c/diff:/var/lib/docker/overlay2/0438a0192165f11b19940586b456c07bfa31d015147b9d008aafaacc09fbc40c/diff:/var/lib/docker/overlay2/80a13b6491a8f9f1c0f6848a375575c20f50d592cb34f21491050776a56fca61/diff:/var/lib/docker/overlay2/dd29a4d45bcf60d3684330374a82b3f3bde4245c5d49661ffdd516cd0c0af260/diff:/var/lib/docker/overlay2/ef8c6936e45d238f2880da0d94945cb610fba8a9e38cdfb3ae6674a82a8f0480/diff:/var/lib/docker/overlay2/9934f45b2cecf953b6f56ee634f63c3dd99c8c358b74fee64fdc62cef64f7723/diff:/var/lib/docker/overlay2/f5ccdcf1811b84ddfcc2efdc07e5feefa2803c1fe476b6653b0a6af55c2e684f/diff:/var/lib/docker/overlay2/2b3b062a0d083aedf009b6c8dde21debe0396b301936ec1950364a1d0ef86b6d/diff:/var/lib/d
ocker/overlay2/db91c57bd6754e3dbdc6c234df413d494606d408e284454bf7ab30cd23f9e840/diff:/var/lib/docker/overlay2/6538f86ce38383e3a133480b44c25afa8b31a61935d6f87270e2cc139e424425/diff:/var/lib/docker/overlay2/80972648e2aa65675fe7f3de22feae57951c0092d5f963f2430650b071940bba/diff:/var/lib/docker/overlay2/19dc0f28f2a85362d2b586f65ab00efa8a97868656af9dc5911259dd3ca649ac/diff:/var/lib/docker/overlay2/99eff050eadab512f36f80d63e8b57d9aa45ef607d723d7ac3f20ece8310a758/diff:/var/lib/docker/overlay2/d6309ab08fa5212992e2b5125645ad32bce2940b50c5e8a5b72e7c7531eb80b4/diff:/var/lib/docker/overlay2/c4d3d6d4212753e50a5f68577281382a30773fb33ca98730aebdfd86d48f612c/diff:/var/lib/docker/overlay2/4292068e16912b59305479ae020d9aa923d57157c4a28dd11e69102be9c1541a/diff:/var/lib/docker/overlay2/2274c567eadc1a99c8173258b3794df0df44fd1abac0aaae2100133ad15b3f30/diff:/var/lib/docker/overlay2/e3bb447cc7563c5af39c4076a93bb7b33bd1a7c6c5ccef7fea2a6a99deddf9f3/diff:/var/lib/docker/overlay2/4329b8a4d7648d8e3bb46a144b9939a5026fa69e5ac188a778cf6ede21a
9627e/diff:/var/lib/docker/overlay2/b600639ff99f881a9eb993fd36e2faf1c0f88a869675ab9d8ec116efc2642784/diff:/var/lib/docker/overlay2/da083fbec4f2fa2681bbaaaa559fdcc46ec2a520e7b9ced39197e805a661fda3/diff:/var/lib/docker/overlay2/63848d00284d16d750a7e746c8be62f8c15819bc2fcb72297788f3c9647257e6/diff:/var/lib/docker/overlay2/3fd667008c6a5c1c5828bb4e003fc21c477a31c4d59b5b675a3886d8a7cb782d/diff:/var/lib/docker/overlay2/6b125cd950aed912fcc597ce8a96bbb5af3dbba111d6eb683ea981387e02e99d/diff:/var/lib/docker/overlay2/b4c672faa14a55ba585c6063024785d7913afc546dd6d04975591d2e13d7b52f/diff:/var/lib/docker/overlay2/c2c0287a05145a26d3313d4e33799ea96103a20115734a66a3c2af8fe728b170/diff:/var/lib/docker/overlay2/dba7b9788bd657997c8cee3b3ef21f9bc4ade7b5a0da25526255047311da571d/diff:/var/lib/docker/overlay2/1f3ae87b3ce804fde9f857de6cb225d5afa00aa39260d197d77f67e840e2d285/diff:/var/lib/docker/overlay2/603b72832425bade21ef2d76583dbe61a46ff7fbe7277673cbc6cd52cf7613dd/diff:/var/lib/docker/overlay2/a47793b1e0564c094c05134af06d2d46a6bcb7
6089b3836b831863ef51c21684/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-093929",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-093929/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-093929",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-093929",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-093929",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df36e65f1c697018769e2cb073b4d1ab9327d2cbebd6626fc0b46fa243b9ed92",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53729"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53730"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53731"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53732"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53733"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/df36e65f1c69",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-093929": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "50811d3bbfa7",
	                        "old-k8s-version-093929"
	                    ],
	                    "NetworkID": "85b1c6253454469ed38e54ce96d4baef11c9b5b3afd90032e806121a14971f03",
	                    "EndpointID": "38533a1a1835e26950d13a17e03943cf6a91601ddc3f8e93c9594d62c6d8df36",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-093929 -n old-k8s-version-093929
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-093929 -n old-k8s-version-093929: exit status 6 (400.507655ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 09:43:41.678995   17574 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-093929" does not appear in /Users/jenkins/minikube-integration/15310-2115/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-093929" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-093929
helpers_test.go:235: (dbg) docker inspect old-k8s-version-093929:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd",
	        "Created": "2022-11-07T17:39:36.809249754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 258627,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T17:39:37.143637295Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/hosts",
	        "LogPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd-json.log",
	        "Name": "/old-k8s-version-093929",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-093929:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-093929",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad-init/diff:/var/lib/docker/overlay2/8ef76795356079208b1acef7376be67a28d951b743a50dd56a60b0d456568ae9/diff:/var/lib/docker/overlay2/f9288d2baad2a30057af35c115d2ebfb4650d5d1d798a60a2334facced392980/diff:/var/lib/docker/overlay2/270f6ca71b47e51691c54d669e6e8e86c321939c053498289406eab5aa0462f5/diff:/var/lib/docker/overlay2/ebe3fe002872a87a7cc54a77192a2ea1f0efb3730f887abec35652e72f152f46/diff:/var/lib/docker/overlay2/83c9d5ae9817ab2b318ad7ba44ade4fe9c22378e15e338b8fe94c5998fbac5c4/diff:/var/lib/docker/overlay2/6426b1d4e4f369bec5066b3c17c47f9c451787be596ba417de62155901d14061/diff:/var/lib/docker/overlay2/f409955dc1056669a5ee00fa64ecfa9733f3de1a92beefeeca73cba51d930189/diff:/var/lib/docker/overlay2/3ecb7ca97b99ba70c03450a3d6d4a4452c7e9e348eec3cf89e6e8ee51aba6a8b/diff:/var/lib/docker/overlay2/9dd8fffded9665b1b7a326cb2bb3e29e3b716cdba6544940490326ddcbfe2bda/diff:/var/lib/docker/overlay2/b43aed
d977d94230f77efb53c193c1a02895ea314fcdece500155052dfeb6b29/diff:/var/lib/docker/overlay2/ba3bd8f651e3503bd8eadf3ce01b8930edaf7eb6af4044593c756be0f3c5d03a/diff:/var/lib/docker/overlay2/359c64a8e323929352da8612c231ccf0f6be76af37c8a208a9ee98c3bce5e2a1/diff:/var/lib/docker/overlay2/868ec2aea7bce1a74dcdf6c7a708b34838e8c08e795aad6e5b974d1ab15b719c/diff:/var/lib/docker/overlay2/0438a0192165f11b19940586b456c07bfa31d015147b9d008aafaacc09fbc40c/diff:/var/lib/docker/overlay2/80a13b6491a8f9f1c0f6848a375575c20f50d592cb34f21491050776a56fca61/diff:/var/lib/docker/overlay2/dd29a4d45bcf60d3684330374a82b3f3bde4245c5d49661ffdd516cd0c0af260/diff:/var/lib/docker/overlay2/ef8c6936e45d238f2880da0d94945cb610fba8a9e38cdfb3ae6674a82a8f0480/diff:/var/lib/docker/overlay2/9934f45b2cecf953b6f56ee634f63c3dd99c8c358b74fee64fdc62cef64f7723/diff:/var/lib/docker/overlay2/f5ccdcf1811b84ddfcc2efdc07e5feefa2803c1fe476b6653b0a6af55c2e684f/diff:/var/lib/docker/overlay2/2b3b062a0d083aedf009b6c8dde21debe0396b301936ec1950364a1d0ef86b6d/diff:/var/lib/d
ocker/overlay2/db91c57bd6754e3dbdc6c234df413d494606d408e284454bf7ab30cd23f9e840/diff:/var/lib/docker/overlay2/6538f86ce38383e3a133480b44c25afa8b31a61935d6f87270e2cc139e424425/diff:/var/lib/docker/overlay2/80972648e2aa65675fe7f3de22feae57951c0092d5f963f2430650b071940bba/diff:/var/lib/docker/overlay2/19dc0f28f2a85362d2b586f65ab00efa8a97868656af9dc5911259dd3ca649ac/diff:/var/lib/docker/overlay2/99eff050eadab512f36f80d63e8b57d9aa45ef607d723d7ac3f20ece8310a758/diff:/var/lib/docker/overlay2/d6309ab08fa5212992e2b5125645ad32bce2940b50c5e8a5b72e7c7531eb80b4/diff:/var/lib/docker/overlay2/c4d3d6d4212753e50a5f68577281382a30773fb33ca98730aebdfd86d48f612c/diff:/var/lib/docker/overlay2/4292068e16912b59305479ae020d9aa923d57157c4a28dd11e69102be9c1541a/diff:/var/lib/docker/overlay2/2274c567eadc1a99c8173258b3794df0df44fd1abac0aaae2100133ad15b3f30/diff:/var/lib/docker/overlay2/e3bb447cc7563c5af39c4076a93bb7b33bd1a7c6c5ccef7fea2a6a99deddf9f3/diff:/var/lib/docker/overlay2/4329b8a4d7648d8e3bb46a144b9939a5026fa69e5ac188a778cf6ede21a
9627e/diff:/var/lib/docker/overlay2/b600639ff99f881a9eb993fd36e2faf1c0f88a869675ab9d8ec116efc2642784/diff:/var/lib/docker/overlay2/da083fbec4f2fa2681bbaaaa559fdcc46ec2a520e7b9ced39197e805a661fda3/diff:/var/lib/docker/overlay2/63848d00284d16d750a7e746c8be62f8c15819bc2fcb72297788f3c9647257e6/diff:/var/lib/docker/overlay2/3fd667008c6a5c1c5828bb4e003fc21c477a31c4d59b5b675a3886d8a7cb782d/diff:/var/lib/docker/overlay2/6b125cd950aed912fcc597ce8a96bbb5af3dbba111d6eb683ea981387e02e99d/diff:/var/lib/docker/overlay2/b4c672faa14a55ba585c6063024785d7913afc546dd6d04975591d2e13d7b52f/diff:/var/lib/docker/overlay2/c2c0287a05145a26d3313d4e33799ea96103a20115734a66a3c2af8fe728b170/diff:/var/lib/docker/overlay2/dba7b9788bd657997c8cee3b3ef21f9bc4ade7b5a0da25526255047311da571d/diff:/var/lib/docker/overlay2/1f3ae87b3ce804fde9f857de6cb225d5afa00aa39260d197d77f67e840e2d285/diff:/var/lib/docker/overlay2/603b72832425bade21ef2d76583dbe61a46ff7fbe7277673cbc6cd52cf7613dd/diff:/var/lib/docker/overlay2/a47793b1e0564c094c05134af06d2d46a6bcb7
6089b3836b831863ef51c21684/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-093929",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-093929/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-093929",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-093929",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-093929",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df36e65f1c697018769e2cb073b4d1ab9327d2cbebd6626fc0b46fa243b9ed92",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53729"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53730"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53731"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53732"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53733"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/df36e65f1c69",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-093929": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "50811d3bbfa7",
	                        "old-k8s-version-093929"
	                    ],
	                    "NetworkID": "85b1c6253454469ed38e54ce96d4baef11c9b5b3afd90032e806121a14971f03",
	                    "EndpointID": "38533a1a1835e26950d13a17e03943cf6a91601ddc3f8e93c9594d62c6d8df36",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-093929 -n old-k8s-version-093929
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-093929 -n old-k8s-version-093929: exit status 6 (409.695583ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 09:43:42.148249   17586 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-093929" does not appear in /Users/jenkins/minikube-integration/15310-2115/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-093929" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-093929 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1107 09:43:42.590599    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:43:49.343016    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
E1107 09:43:56.252606    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:44:05.962560    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:44:05.967937    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:44:05.980133    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:44:06.000332    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:44:06.041173    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:44:06.121658    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:44:06.281761    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:44:06.604007    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:44:07.245316    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:44:07.753419    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:44:07.759725    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:44:07.771908    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:44:07.794114    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:44:07.834482    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:44:07.915789    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:44:08.077950    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:44:08.398109    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:44:08.526143    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:44:09.038451    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:44:10.320766    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:44:11.088456    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:44:12.881065    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:44:16.210811    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:44:18.001486    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:44:26.453062    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:44:27.517482    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/cilium-092105/client.crt: no such file or directory
E1107 09:44:28.244216    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:44:46.935198    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:44:48.725073    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:44:55.203359    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/cilium-092105/client.crt: no such file or directory
E1107 09:45:04.513259    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-093929 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.205928836s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-093929 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-093929 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-093929 describe deploy/metrics-server -n kube-system: exit status 1 (35.35532ms)

** stderr ** 
	error: context "old-k8s-version-093929" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-093929 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-093929
helpers_test.go:235: (dbg) docker inspect old-k8s-version-093929:

-- stdout --
	[
	    {
	        "Id": "50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd",
	        "Created": "2022-11-07T17:39:36.809249754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 258627,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T17:39:37.143637295Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/hosts",
	        "LogPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd-json.log",
	        "Name": "/old-k8s-version-093929",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-093929:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-093929",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad-init/diff:/var/lib/docker/overlay2/8ef76795356079208b1acef7376be67a28d951b743a50dd56a60b0d456568ae9/diff:/var/lib/docker/overlay2/f9288d2baad2a30057af35c115d2ebfb4650d5d1d798a60a2334facced392980/diff:/var/lib/docker/overlay2/270f6ca71b47e51691c54d669e6e8e86c321939c053498289406eab5aa0462f5/diff:/var/lib/docker/overlay2/ebe3fe002872a87a7cc54a77192a2ea1f0efb3730f887abec35652e72f152f46/diff:/var/lib/docker/overlay2/83c9d5ae9817ab2b318ad7ba44ade4fe9c22378e15e338b8fe94c5998fbac5c4/diff:/var/lib/docker/overlay2/6426b1d4e4f369bec5066b3c17c47f9c451787be596ba417de62155901d14061/diff:/var/lib/docker/overlay2/f409955dc1056669a5ee00fa64ecfa9733f3de1a92beefeeca73cba51d930189/diff:/var/lib/docker/overlay2/3ecb7ca97b99ba70c03450a3d6d4a4452c7e9e348eec3cf89e6e8ee51aba6a8b/diff:/var/lib/docker/overlay2/9dd8fffded9665b1b7a326cb2bb3e29e3b716cdba6544940490326ddcbfe2bda/diff:/var/lib/docker/overlay2/b43aed
d977d94230f77efb53c193c1a02895ea314fcdece500155052dfeb6b29/diff:/var/lib/docker/overlay2/ba3bd8f651e3503bd8eadf3ce01b8930edaf7eb6af4044593c756be0f3c5d03a/diff:/var/lib/docker/overlay2/359c64a8e323929352da8612c231ccf0f6be76af37c8a208a9ee98c3bce5e2a1/diff:/var/lib/docker/overlay2/868ec2aea7bce1a74dcdf6c7a708b34838e8c08e795aad6e5b974d1ab15b719c/diff:/var/lib/docker/overlay2/0438a0192165f11b19940586b456c07bfa31d015147b9d008aafaacc09fbc40c/diff:/var/lib/docker/overlay2/80a13b6491a8f9f1c0f6848a375575c20f50d592cb34f21491050776a56fca61/diff:/var/lib/docker/overlay2/dd29a4d45bcf60d3684330374a82b3f3bde4245c5d49661ffdd516cd0c0af260/diff:/var/lib/docker/overlay2/ef8c6936e45d238f2880da0d94945cb610fba8a9e38cdfb3ae6674a82a8f0480/diff:/var/lib/docker/overlay2/9934f45b2cecf953b6f56ee634f63c3dd99c8c358b74fee64fdc62cef64f7723/diff:/var/lib/docker/overlay2/f5ccdcf1811b84ddfcc2efdc07e5feefa2803c1fe476b6653b0a6af55c2e684f/diff:/var/lib/docker/overlay2/2b3b062a0d083aedf009b6c8dde21debe0396b301936ec1950364a1d0ef86b6d/diff:/var/lib/d
ocker/overlay2/db91c57bd6754e3dbdc6c234df413d494606d408e284454bf7ab30cd23f9e840/diff:/var/lib/docker/overlay2/6538f86ce38383e3a133480b44c25afa8b31a61935d6f87270e2cc139e424425/diff:/var/lib/docker/overlay2/80972648e2aa65675fe7f3de22feae57951c0092d5f963f2430650b071940bba/diff:/var/lib/docker/overlay2/19dc0f28f2a85362d2b586f65ab00efa8a97868656af9dc5911259dd3ca649ac/diff:/var/lib/docker/overlay2/99eff050eadab512f36f80d63e8b57d9aa45ef607d723d7ac3f20ece8310a758/diff:/var/lib/docker/overlay2/d6309ab08fa5212992e2b5125645ad32bce2940b50c5e8a5b72e7c7531eb80b4/diff:/var/lib/docker/overlay2/c4d3d6d4212753e50a5f68577281382a30773fb33ca98730aebdfd86d48f612c/diff:/var/lib/docker/overlay2/4292068e16912b59305479ae020d9aa923d57157c4a28dd11e69102be9c1541a/diff:/var/lib/docker/overlay2/2274c567eadc1a99c8173258b3794df0df44fd1abac0aaae2100133ad15b3f30/diff:/var/lib/docker/overlay2/e3bb447cc7563c5af39c4076a93bb7b33bd1a7c6c5ccef7fea2a6a99deddf9f3/diff:/var/lib/docker/overlay2/4329b8a4d7648d8e3bb46a144b9939a5026fa69e5ac188a778cf6ede21a
9627e/diff:/var/lib/docker/overlay2/b600639ff99f881a9eb993fd36e2faf1c0f88a869675ab9d8ec116efc2642784/diff:/var/lib/docker/overlay2/da083fbec4f2fa2681bbaaaa559fdcc46ec2a520e7b9ced39197e805a661fda3/diff:/var/lib/docker/overlay2/63848d00284d16d750a7e746c8be62f8c15819bc2fcb72297788f3c9647257e6/diff:/var/lib/docker/overlay2/3fd667008c6a5c1c5828bb4e003fc21c477a31c4d59b5b675a3886d8a7cb782d/diff:/var/lib/docker/overlay2/6b125cd950aed912fcc597ce8a96bbb5af3dbba111d6eb683ea981387e02e99d/diff:/var/lib/docker/overlay2/b4c672faa14a55ba585c6063024785d7913afc546dd6d04975591d2e13d7b52f/diff:/var/lib/docker/overlay2/c2c0287a05145a26d3313d4e33799ea96103a20115734a66a3c2af8fe728b170/diff:/var/lib/docker/overlay2/dba7b9788bd657997c8cee3b3ef21f9bc4ade7b5a0da25526255047311da571d/diff:/var/lib/docker/overlay2/1f3ae87b3ce804fde9f857de6cb225d5afa00aa39260d197d77f67e840e2d285/diff:/var/lib/docker/overlay2/603b72832425bade21ef2d76583dbe61a46ff7fbe7277673cbc6cd52cf7613dd/diff:/var/lib/docker/overlay2/a47793b1e0564c094c05134af06d2d46a6bcb7
6089b3836b831863ef51c21684/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-093929",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-093929/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-093929",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-093929",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-093929",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df36e65f1c697018769e2cb073b4d1ab9327d2cbebd6626fc0b46fa243b9ed92",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53729"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53730"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53731"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53732"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53733"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/df36e65f1c69",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-093929": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "50811d3bbfa7",
	                        "old-k8s-version-093929"
	                    ],
	                    "NetworkID": "85b1c6253454469ed38e54ce96d4baef11c9b5b3afd90032e806121a14971f03",
	                    "EndpointID": "38533a1a1835e26950d13a17e03943cf6a91601ddc3f8e93c9594d62c6d8df36",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-093929 -n old-k8s-version-093929
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-093929 -n old-k8s-version-093929: exit status 6 (397.66003ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1107 09:45:11.847847   17703 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-093929" does not appear in /Users/jenkins/minikube-integration/15310-2115/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-093929" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.70s)

TestStartStop/group/old-k8s-version/serial/SecondStart (489.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-093929 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E1107 09:45:14.062276    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:45:15.342621    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:45:17.903251    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:45:23.024244    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:45:27.896769    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:45:29.686640    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:45:33.266066    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:45:38.300549    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:45:53.746797    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:46:12.406245    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:46:34.710288    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:46:40.097654    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:46:49.820168    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:46:51.611331    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:47:20.669468    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:47:48.360420    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:47:53.285864    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:47:56.633782    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:48:03.237653    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 09:48:14.026033    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 09:48:21.658818    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-093929 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m5.209432509s)

-- stdout --
	* [old-k8s-version-093929] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-093929 in cluster old-k8s-version-093929
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-093929" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I1107 09:45:13.889502   17733 out.go:296] Setting OutFile to fd 1 ...
	I1107 09:45:13.889677   17733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:45:13.889684   17733 out.go:309] Setting ErrFile to fd 2...
	I1107 09:45:13.889689   17733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:45:13.889813   17733 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	I1107 09:45:13.890328   17733 out.go:303] Setting JSON to false
	I1107 09:45:13.908955   17733 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":4488,"bootTime":1667838625,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1107 09:45:13.909106   17733 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 09:45:13.931197   17733 out.go:177] * [old-k8s-version-093929] minikube v1.28.0 on Darwin 13.0
	I1107 09:45:13.975109   17733 notify.go:220] Checking for updates...
	I1107 09:45:13.996972   17733 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 09:45:14.040159   17733 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:45:14.062244   17733 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 09:45:14.084325   17733 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 09:45:14.106173   17733 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	I1107 09:45:14.128177   17733 config.go:180] Loaded profile config "old-k8s-version-093929": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1107 09:45:14.149996   17733 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I1107 09:45:14.172040   17733 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 09:45:14.236180   17733 docker.go:137] docker version: linux-20.10.20
	I1107 09:45:14.236327   17733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 09:45:14.379209   17733 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-07 17:45:14.302561973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 09:45:14.400150   17733 out.go:177] * Using the docker driver based on existing profile
	I1107 09:45:14.421223   17733 start.go:282] selected driver: docker
	I1107 09:45:14.421246   17733 start.go:808] validating driver "docker" against &{Name:old-k8s-version-093929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-093929 Namespace:default APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:45:14.421393   17733 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 09:45:14.425053   17733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 09:45:14.566982   17733 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-07 17:45:14.490733431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 09:45:14.567133   17733 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 09:45:14.567152   17733 cni.go:95] Creating CNI manager for ""
	I1107 09:45:14.567162   17733 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 09:45:14.567174   17733 start_flags.go:317] config:
	{Name:old-k8s-version-093929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-093929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:45:14.609789   17733 out.go:177] * Starting control plane node old-k8s-version-093929 in cluster old-k8s-version-093929
	I1107 09:45:14.633035   17733 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 09:45:14.655000   17733 out.go:177] * Pulling base image ...
	I1107 09:45:14.698225   17733 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 09:45:14.698231   17733 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 09:45:14.698332   17733 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1107 09:45:14.698361   17733 cache.go:57] Caching tarball of preloaded images
	I1107 09:45:14.698665   17733 preload.go:174] Found /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 09:45:14.698694   17733 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1107 09:45:14.699635   17733 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/config.json ...
	I1107 09:45:14.754619   17733 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 09:45:14.754636   17733 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 09:45:14.754646   17733 cache.go:208] Successfully downloaded all kic artifacts
	I1107 09:45:14.754693   17733 start.go:364] acquiring machines lock for old-k8s-version-093929: {Name:mk1219dfd9d2598aff29791b7c2ffd86213e8a98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 09:45:14.754785   17733 start.go:368] acquired machines lock for "old-k8s-version-093929" in 70.909µs
	I1107 09:45:14.754812   17733 start.go:96] Skipping create...Using existing machine configuration
	I1107 09:45:14.754823   17733 fix.go:55] fixHost starting: 
	I1107 09:45:14.755095   17733 cli_runner.go:164] Run: docker container inspect old-k8s-version-093929 --format={{.State.Status}}
	I1107 09:45:14.813579   17733 fix.go:103] recreateIfNeeded on old-k8s-version-093929: state=Stopped err=<nil>
	W1107 09:45:14.813611   17733 fix.go:129] unexpected machine state, will restart: <nil>
	I1107 09:45:14.835997   17733 out.go:177] * Restarting existing docker container for "old-k8s-version-093929" ...
	I1107 09:45:14.857346   17733 cli_runner.go:164] Run: docker start old-k8s-version-093929
	I1107 09:45:15.184086   17733 cli_runner.go:164] Run: docker container inspect old-k8s-version-093929 --format={{.State.Status}}
	I1107 09:45:15.294015   17733 kic.go:415] container "old-k8s-version-093929" state is running.
	I1107 09:45:15.294633   17733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-093929
	I1107 09:45:15.356337   17733 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/config.json ...
	I1107 09:45:15.356762   17733 machine.go:88] provisioning docker machine ...
	I1107 09:45:15.356785   17733 ubuntu.go:169] provisioning hostname "old-k8s-version-093929"
	I1107 09:45:15.356869   17733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:45:15.415664   17733 main.go:134] libmachine: Using SSH client type: native
	I1107 09:45:15.415899   17733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53969 <nil> <nil>}
	I1107 09:45:15.415913   17733 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-093929 && echo "old-k8s-version-093929" | sudo tee /etc/hostname
	I1107 09:45:15.540978   17733 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-093929
	
	I1107 09:45:15.541112   17733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:45:15.600283   17733 main.go:134] libmachine: Using SSH client type: native
	I1107 09:45:15.600454   17733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53969 <nil> <nil>}
	I1107 09:45:15.600467   17733 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-093929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-093929/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-093929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 09:45:15.720455   17733 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 09:45:15.720475   17733 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15310-2115/.minikube CaCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15310-2115/.minikube}
	I1107 09:45:15.720497   17733 ubuntu.go:177] setting up certificates
	I1107 09:45:15.720506   17733 provision.go:83] configureAuth start
	I1107 09:45:15.720592   17733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-093929
	I1107 09:45:15.779001   17733 provision.go:138] copyHostCerts
	I1107 09:45:15.779098   17733 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem, removing ...
	I1107 09:45:15.779109   17733 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 09:45:15.779213   17733 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem (1082 bytes)
	I1107 09:45:15.779428   17733 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem, removing ...
	I1107 09:45:15.779436   17733 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 09:45:15.779503   17733 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem (1123 bytes)
	I1107 09:45:15.779664   17733 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem, removing ...
	I1107 09:45:15.779669   17733 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 09:45:15.779728   17733 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem (1679 bytes)
	I1107 09:45:15.779866   17733 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-093929 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-093929]
	I1107 09:45:15.866927   17733 provision.go:172] copyRemoteCerts
	I1107 09:45:15.867000   17733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 09:45:15.867063   17733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:45:15.925274   17733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53969 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/old-k8s-version-093929/id_rsa Username:docker}
	I1107 09:45:16.011628   17733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 09:45:16.028887   17733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1107 09:45:16.046182   17733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 09:45:16.063946   17733 provision.go:86] duration metric: configureAuth took 343.41517ms
	I1107 09:45:16.063960   17733 ubuntu.go:193] setting minikube options for container-runtime
	I1107 09:45:16.064125   17733 config.go:180] Loaded profile config "old-k8s-version-093929": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1107 09:45:16.064199   17733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:45:16.122402   17733 main.go:134] libmachine: Using SSH client type: native
	I1107 09:45:16.122575   17733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53969 <nil> <nil>}
	I1107 09:45:16.122584   17733 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 09:45:16.238380   17733 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 09:45:16.238396   17733 ubuntu.go:71] root file system type: overlay
	I1107 09:45:16.238551   17733 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 09:45:16.238639   17733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:45:16.296193   17733 main.go:134] libmachine: Using SSH client type: native
	I1107 09:45:16.296346   17733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53969 <nil> <nil>}
	I1107 09:45:16.296399   17733 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 09:45:16.418511   17733 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 09:45:16.418614   17733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:45:16.476238   17733 main.go:134] libmachine: Using SSH client type: native
	I1107 09:45:16.476402   17733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 53969 <nil> <nil>}
	I1107 09:45:16.476415   17733 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 09:45:16.597875   17733 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 09:45:16.597894   17733 machine.go:91] provisioned docker machine in 1.241085458s
	I1107 09:45:16.597906   17733 start.go:300] post-start starting for "old-k8s-version-093929" (driver="docker")
	I1107 09:45:16.597914   17733 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 09:45:16.598000   17733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 09:45:16.598068   17733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:45:16.656004   17733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53969 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/old-k8s-version-093929/id_rsa Username:docker}
	I1107 09:45:16.743152   17733 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 09:45:16.746683   17733 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 09:45:16.746701   17733 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 09:45:16.746708   17733 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 09:45:16.746712   17733 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 09:45:16.746720   17733 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/addons for local assets ...
	I1107 09:45:16.746813   17733 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/files for local assets ...
	I1107 09:45:16.747011   17733 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> 32672.pem in /etc/ssl/certs
	I1107 09:45:16.747206   17733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 09:45:16.754374   17733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:45:16.771434   17733 start.go:303] post-start completed in 173.509001ms
	I1107 09:45:16.771519   17733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 09:45:16.771583   17733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:45:16.829408   17733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53969 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/old-k8s-version-093929/id_rsa Username:docker}
	I1107 09:45:16.911348   17733 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 09:45:16.915985   17733 fix.go:57] fixHost completed within 2.161094037s
	I1107 09:45:16.916005   17733 start.go:83] releasing machines lock for "old-k8s-version-093929", held for 2.161145541s
	I1107 09:45:16.916125   17733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-093929
	I1107 09:45:16.973917   17733 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1107 09:45:16.973933   17733 ssh_runner.go:195] Run: systemctl --version
	I1107 09:45:16.974002   17733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:45:16.974015   17733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:45:17.033596   17733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53969 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/old-k8s-version-093929/id_rsa Username:docker}
	I1107 09:45:17.034814   17733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53969 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/old-k8s-version-093929/id_rsa Username:docker}
	I1107 09:45:17.370748   17733 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 09:45:17.380658   17733 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 09:45:17.380732   17733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 09:45:17.392591   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 09:45:17.405830   17733 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 09:45:17.478625   17733 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 09:45:17.546746   17733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:45:17.616033   17733 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 09:45:17.823335   17733 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:45:17.851644   17733 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:45:17.923258   17733 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	I1107 09:45:17.923482   17733 cli_runner.go:164] Run: docker exec -t old-k8s-version-093929 dig +short host.docker.internal
	I1107 09:45:18.036245   17733 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1107 09:45:18.036360   17733 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1107 09:45:18.040614   17733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 09:45:18.050177   17733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:45:18.107881   17733 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 09:45:18.107998   17733 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 09:45:18.130710   17733 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1107 09:45:18.130732   17733 docker.go:543] Images already preloaded, skipping extraction
	I1107 09:45:18.130850   17733 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 09:45:18.153495   17733 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1107 09:45:18.153522   17733 cache_images.go:84] Images are preloaded, skipping loading
	I1107 09:45:18.153623   17733 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 09:45:18.221030   17733 cni.go:95] Creating CNI manager for ""
	I1107 09:45:18.221045   17733 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 09:45:18.221055   17733 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 09:45:18.221078   17733 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-093929 NodeName:old-k8s-version-093929 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 09:45:18.221206   17733 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-093929"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-093929
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 09:45:18.221282   17733 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-093929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-093929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 09:45:18.221355   17733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1107 09:45:18.229080   17733 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 09:45:18.229143   17733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 09:45:18.236242   17733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I1107 09:45:18.248639   17733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 09:45:18.260968   17733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1107 09:45:18.273982   17733 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1107 09:45:18.280950   17733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 09:45:18.292696   17733 certs.go:54] Setting up /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929 for IP: 192.168.76.2
	I1107 09:45:18.292838   17733 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key
	I1107 09:45:18.292913   17733 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key
	I1107 09:45:18.293033   17733 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/client.key
	I1107 09:45:18.293119   17733 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/apiserver.key.31bdca25
	I1107 09:45:18.293190   17733 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/proxy-client.key
	I1107 09:45:18.293431   17733 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem (1338 bytes)
	W1107 09:45:18.293469   17733 certs.go:384] ignoring /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267_empty.pem, impossibly tiny 0 bytes
	I1107 09:45:18.293483   17733 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 09:45:18.293521   17733 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem (1082 bytes)
	I1107 09:45:18.293582   17733 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem (1123 bytes)
	I1107 09:45:18.293616   17733 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem (1679 bytes)
	I1107 09:45:18.293685   17733 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:45:18.294320   17733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 09:45:18.318142   17733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1107 09:45:18.339165   17733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 09:45:18.357343   17733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/old-k8s-version-093929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1107 09:45:18.374176   17733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 09:45:18.392377   17733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 09:45:18.409276   17733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 09:45:18.426308   17733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 09:45:18.443595   17733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /usr/share/ca-certificates/32672.pem (1708 bytes)
	I1107 09:45:18.460306   17733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 09:45:18.477314   17733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem --> /usr/share/ca-certificates/3267.pem (1338 bytes)
	I1107 09:45:18.495445   17733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 09:45:18.508297   17733 ssh_runner.go:195] Run: openssl version
	I1107 09:45:18.513642   17733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 09:45:18.521574   17733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:45:18.525679   17733 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:45:18.525736   17733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:45:18.530850   17733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 09:45:18.538110   17733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3267.pem && ln -fs /usr/share/ca-certificates/3267.pem /etc/ssl/certs/3267.pem"
	I1107 09:45:18.545934   17733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3267.pem
	I1107 09:45:18.550182   17733 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/3267.pem
	I1107 09:45:18.550231   17733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3267.pem
	I1107 09:45:18.555728   17733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3267.pem /etc/ssl/certs/51391683.0"
	I1107 09:45:18.564661   17733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32672.pem && ln -fs /usr/share/ca-certificates/32672.pem /etc/ssl/certs/32672.pem"
	I1107 09:45:18.572556   17733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32672.pem
	I1107 09:45:18.576662   17733 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/32672.pem
	I1107 09:45:18.576712   17733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32672.pem
	I1107 09:45:18.581957   17733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32672.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 09:45:18.589307   17733 kubeadm.go:396] StartCluster: {Name:old-k8s-version-093929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-093929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:45:18.589433   17733 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 09:45:18.612161   17733 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 09:45:18.620072   17733 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1107 09:45:18.620092   17733 kubeadm.go:627] restartCluster start
	I1107 09:45:18.620150   17733 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1107 09:45:18.627216   17733 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:18.627293   17733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-093929
	I1107 09:45:18.689355   17733 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-093929" does not appear in /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:45:18.689526   17733 kubeconfig.go:146] "old-k8s-version-093929" context is missing from /Users/jenkins/minikube-integration/15310-2115/kubeconfig - will repair!
	I1107 09:45:18.690912   17733 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/kubeconfig: {Name:mk892d56d979702eee7d784abc692970bda7bca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:45:18.692297   17733 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1107 09:45:18.700045   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:18.700113   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:18.708464   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:18.908574   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:18.912604   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:18.921579   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:19.109599   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:19.109708   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:19.119939   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:19.308633   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:19.308723   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:19.317257   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:19.510637   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:19.510837   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:19.521767   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:19.709698   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:19.709860   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:19.720307   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:19.908839   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:19.908956   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:19.919215   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:20.108975   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:20.109179   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:20.119875   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:20.308600   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:20.308667   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:20.317163   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:20.508613   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:20.508726   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:20.518165   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:20.710695   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:20.710835   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:20.721845   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:20.910689   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:20.910922   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:20.921825   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:21.108746   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:21.108861   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:21.117940   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:21.308651   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:21.308721   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:21.317846   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:21.508845   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:21.508969   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:21.519376   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:21.708696   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:21.708822   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:21.718829   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:21.718842   17733 api_server.go:165] Checking apiserver status ...
	I1107 09:45:21.718898   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:45:21.727267   17733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:45:21.727278   17733 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I1107 09:45:21.727287   17733 kubeadm.go:1114] stopping kube-system containers ...
	I1107 09:45:21.727368   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 09:45:21.748132   17733 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1107 09:45:21.758379   17733 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 09:45:21.766205   17733 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Nov  7 17:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Nov  7 17:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Nov  7 17:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5675 Nov  7 17:41 /etc/kubernetes/scheduler.conf
	
	I1107 09:45:21.766275   17733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1107 09:45:21.773491   17733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1107 09:45:21.780663   17733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1107 09:45:21.787999   17733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1107 09:45:21.795334   17733 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 09:45:21.803006   17733 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1107 09:45:21.803017   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:45:21.856701   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:45:22.402576   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:45:22.599845   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:45:22.659580   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:45:22.715274   17733 api_server.go:51] waiting for apiserver process to appear ...
	I1107 09:45:22.715355   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:23.226183   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:23.724570   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:24.225089   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:24.725048   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:25.224648   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:25.724414   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:26.225966   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:26.724545   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:27.224847   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:27.725614   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:28.226453   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:28.725045   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:29.224811   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:29.724359   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:30.225382   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:30.724959   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:31.224524   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:31.726451   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:32.225002   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:32.726028   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:33.226491   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:33.724584   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:34.226487   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:34.725168   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:35.224602   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:35.724886   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:36.224746   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:36.724742   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:37.225442   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:37.725512   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:38.224983   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:38.725189   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:39.224895   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:39.725087   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:40.225107   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:40.725305   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:41.225434   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:41.724987   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:42.225049   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:42.725750   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:43.225047   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:43.724623   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:44.226805   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:44.724893   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:45.225707   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:45.725333   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:46.224804   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:46.724763   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:47.226942   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:47.724979   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:48.226937   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:48.725681   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:49.225296   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:49.726410   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:50.225982   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:50.725086   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:51.225013   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:51.725934   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:52.225678   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:52.725077   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:53.225416   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:53.725780   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:54.225489   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:54.726414   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:55.225197   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:55.725256   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:56.227200   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:56.726930   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:57.225467   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:57.725266   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:58.227214   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:58.727243   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:59.227356   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:45:59.727255   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:00.225245   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:00.725792   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:01.227347   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:01.727005   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:02.225457   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:02.727372   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:03.227483   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:03.726797   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:04.225934   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:04.725475   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:05.225507   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:05.725478   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:06.226559   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:06.725735   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:07.225621   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:07.725796   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:08.227104   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:08.725565   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:09.225377   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:09.725835   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:10.225865   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:10.727590   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:11.225513   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:11.726633   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:12.227746   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:12.727644   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:13.225617   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:13.727676   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:14.226331   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:14.727691   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:15.225943   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:15.727764   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:16.226450   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:16.726759   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:17.225914   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:17.726001   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:18.227846   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:18.725715   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:19.225832   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:19.725982   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:20.226248   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:20.725838   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:21.227937   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:21.726223   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:22.226027   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:22.728092   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:46:22.752263   17733 logs.go:274] 0 containers: []
	W1107 09:46:22.752279   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:46:22.752363   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:46:22.774930   17733 logs.go:274] 0 containers: []
	W1107 09:46:22.774943   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:46:22.775029   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:46:22.796707   17733 logs.go:274] 0 containers: []
	W1107 09:46:22.796719   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:46:22.796804   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:46:22.821871   17733 logs.go:274] 0 containers: []
	W1107 09:46:22.821883   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:46:22.821971   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:46:22.842858   17733 logs.go:274] 0 containers: []
	W1107 09:46:22.842871   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:46:22.842954   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:46:22.865518   17733 logs.go:274] 0 containers: []
	W1107 09:46:22.865533   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:46:22.865615   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:46:22.887428   17733 logs.go:274] 0 containers: []
	W1107 09:46:22.887440   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:46:22.887527   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:46:22.910012   17733 logs.go:274] 0 containers: []
	W1107 09:46:22.910023   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:46:22.910030   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:46:22.910037   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:46:22.948353   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:46:22.948368   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:46:22.960338   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:46:22.960350   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:46:23.014593   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:46:23.014611   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:46:23.014618   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:46:23.028640   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:46:23.028652   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:46:25.077389   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048662785s)
	I1107 09:46:27.578618   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:27.728298   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:46:27.753287   17733 logs.go:274] 0 containers: []
	W1107 09:46:27.753299   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:46:27.753386   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:46:27.777575   17733 logs.go:274] 0 containers: []
	W1107 09:46:27.777587   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:46:27.777669   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:46:27.804486   17733 logs.go:274] 0 containers: []
	W1107 09:46:27.804499   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:46:27.804589   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:46:27.828290   17733 logs.go:274] 0 containers: []
	W1107 09:46:27.828302   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:46:27.828388   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:46:27.852985   17733 logs.go:274] 0 containers: []
	W1107 09:46:27.852997   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:46:27.853079   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:46:27.875141   17733 logs.go:274] 0 containers: []
	W1107 09:46:27.875153   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:46:27.875242   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:46:27.898436   17733 logs.go:274] 0 containers: []
	W1107 09:46:27.898448   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:46:27.898534   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:46:27.921739   17733 logs.go:274] 0 containers: []
	W1107 09:46:27.921751   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:46:27.921758   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:46:27.921770   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:46:27.959970   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:46:27.959983   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:46:27.973121   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:46:27.973134   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:46:28.026719   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:46:28.026730   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:46:28.026737   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:46:28.040495   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:46:28.040507   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:46:30.084026   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043445823s)
	I1107 09:46:32.584365   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:32.728350   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:46:32.752873   17733 logs.go:274] 0 containers: []
	W1107 09:46:32.752891   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:46:32.752978   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:46:32.774972   17733 logs.go:274] 0 containers: []
	W1107 09:46:32.774983   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:46:32.775068   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:46:32.800843   17733 logs.go:274] 0 containers: []
	W1107 09:46:32.800890   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:46:32.801013   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:46:32.824576   17733 logs.go:274] 0 containers: []
	W1107 09:46:32.824588   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:46:32.824676   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:46:32.847027   17733 logs.go:274] 0 containers: []
	W1107 09:46:32.847039   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:46:32.847125   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:46:32.869888   17733 logs.go:274] 0 containers: []
	W1107 09:46:32.869899   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:46:32.869981   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:46:32.899824   17733 logs.go:274] 0 containers: []
	W1107 09:46:32.899838   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:46:32.899930   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:46:32.927777   17733 logs.go:274] 0 containers: []
	W1107 09:46:32.927789   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:46:32.927796   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:46:32.927806   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:46:32.941578   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:46:32.941594   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:46:34.987533   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045866461s)
	I1107 09:46:34.987640   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:46:34.987649   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:46:35.025198   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:46:35.025213   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:46:35.036828   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:46:35.036844   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:46:35.091634   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:46:37.592725   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:37.726505   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:46:37.751295   17733 logs.go:274] 0 containers: []
	W1107 09:46:37.751307   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:46:37.751391   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:46:37.773232   17733 logs.go:274] 0 containers: []
	W1107 09:46:37.773245   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:46:37.773327   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:46:37.797850   17733 logs.go:274] 0 containers: []
	W1107 09:46:37.797872   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:46:37.797975   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:46:37.821961   17733 logs.go:274] 0 containers: []
	W1107 09:46:37.821972   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:46:37.822057   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:46:37.845927   17733 logs.go:274] 0 containers: []
	W1107 09:46:37.845942   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:46:37.846023   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:46:37.868180   17733 logs.go:274] 0 containers: []
	W1107 09:46:37.868192   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:46:37.868277   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:46:37.890832   17733 logs.go:274] 0 containers: []
	W1107 09:46:37.890844   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:46:37.890928   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:46:37.913279   17733 logs.go:274] 0 containers: []
	W1107 09:46:37.913291   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:46:37.913297   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:46:37.913304   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:46:37.952116   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:46:37.952129   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:46:37.963623   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:46:37.963636   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:46:38.016999   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:46:38.017017   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:46:38.017024   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:46:38.031148   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:46:38.031160   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:46:40.078353   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047117544s)
	I1107 09:46:42.578822   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:42.726555   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:46:42.751174   17733 logs.go:274] 0 containers: []
	W1107 09:46:42.751185   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:46:42.751266   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:46:42.774484   17733 logs.go:274] 0 containers: []
	W1107 09:46:42.774497   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:46:42.774584   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:46:42.799748   17733 logs.go:274] 0 containers: []
	W1107 09:46:42.799762   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:46:42.799861   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:46:42.828389   17733 logs.go:274] 0 containers: []
	W1107 09:46:42.828411   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:46:42.828497   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:46:42.852069   17733 logs.go:274] 0 containers: []
	W1107 09:46:42.852081   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:46:42.852166   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:46:42.875721   17733 logs.go:274] 0 containers: []
	W1107 09:46:42.875734   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:46:42.875817   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:46:42.898898   17733 logs.go:274] 0 containers: []
	W1107 09:46:42.898911   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:46:42.898996   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:46:42.922552   17733 logs.go:274] 0 containers: []
	W1107 09:46:42.922563   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:46:42.922572   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:46:42.922583   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:46:42.971922   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:46:42.971942   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:46:42.984378   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:46:42.984390   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:46:43.037368   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:46:43.037379   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:46:43.037385   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:46:43.051127   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:46:43.051138   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:46:45.098081   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046868734s)
	I1107 09:46:47.599241   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:47.726727   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:46:47.750312   17733 logs.go:274] 0 containers: []
	W1107 09:46:47.750324   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:46:47.750404   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:46:47.771974   17733 logs.go:274] 0 containers: []
	W1107 09:46:47.771990   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:46:47.772075   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:46:47.797884   17733 logs.go:274] 0 containers: []
	W1107 09:46:47.797895   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:46:47.797978   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:46:47.822896   17733 logs.go:274] 0 containers: []
	W1107 09:46:47.822914   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:46:47.823003   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:46:47.845310   17733 logs.go:274] 0 containers: []
	W1107 09:46:47.845323   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:46:47.845406   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:46:47.866584   17733 logs.go:274] 0 containers: []
	W1107 09:46:47.866595   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:46:47.866674   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:46:47.889695   17733 logs.go:274] 0 containers: []
	W1107 09:46:47.889708   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:46:47.889793   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:46:47.912336   17733 logs.go:274] 0 containers: []
	W1107 09:46:47.912348   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:46:47.912356   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:46:47.912368   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:46:47.950725   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:46:47.950737   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:46:47.962361   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:46:47.962374   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:46:48.023241   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:46:48.023251   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:46:48.023258   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:46:48.038365   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:46:48.038379   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:46:50.087403   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048950112s)
	I1107 09:46:52.587812   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:52.728575   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:46:52.778820   17733 logs.go:274] 0 containers: []
	W1107 09:46:52.778837   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:46:52.778932   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:46:52.805248   17733 logs.go:274] 0 containers: []
	W1107 09:46:52.805261   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:46:52.805346   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:46:52.827462   17733 logs.go:274] 0 containers: []
	W1107 09:46:52.827475   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:46:52.827569   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:46:52.849891   17733 logs.go:274] 0 containers: []
	W1107 09:46:52.849903   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:46:52.849988   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:46:52.872044   17733 logs.go:274] 0 containers: []
	W1107 09:46:52.872055   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:46:52.872138   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:46:52.894133   17733 logs.go:274] 0 containers: []
	W1107 09:46:52.894146   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:46:52.894232   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:46:52.917418   17733 logs.go:274] 0 containers: []
	W1107 09:46:52.917433   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:46:52.917529   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:46:52.939117   17733 logs.go:274] 0 containers: []
	W1107 09:46:52.939128   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:46:52.939136   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:46:52.939143   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:46:52.992442   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:46:52.992454   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:46:52.992460   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:46:53.006640   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:46:53.006654   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:46:55.067878   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061147832s)
	I1107 09:46:55.068031   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:46:55.068041   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:46:55.106193   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:46:55.106206   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:46:57.619344   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:46:57.729126   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:46:57.756298   17733 logs.go:274] 0 containers: []
	W1107 09:46:57.756309   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:46:57.756393   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:46:57.779573   17733 logs.go:274] 0 containers: []
	W1107 09:46:57.779590   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:46:57.779677   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:46:57.807180   17733 logs.go:274] 0 containers: []
	W1107 09:46:57.807193   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:46:57.807280   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:46:57.828744   17733 logs.go:274] 0 containers: []
	W1107 09:46:57.828758   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:46:57.828844   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:46:57.850922   17733 logs.go:274] 0 containers: []
	W1107 09:46:57.850933   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:46:57.851035   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:46:57.873288   17733 logs.go:274] 0 containers: []
	W1107 09:46:57.873300   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:46:57.873392   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:46:57.895304   17733 logs.go:274] 0 containers: []
	W1107 09:46:57.895316   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:46:57.895414   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:46:57.917201   17733 logs.go:274] 0 containers: []
	W1107 09:46:57.917213   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:46:57.917220   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:46:57.917227   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:46:57.955383   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:46:57.955397   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:46:57.966538   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:46:57.966550   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:46:58.019046   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:46:58.019059   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:46:58.019069   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:46:58.032766   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:46:58.032778   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:47:00.077719   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044867873s)
	I1107 09:47:02.578202   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:47:02.727136   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:47:02.751686   17733 logs.go:274] 0 containers: []
	W1107 09:47:02.751699   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:47:02.751790   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:47:02.775769   17733 logs.go:274] 0 containers: []
	W1107 09:47:02.775781   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:47:02.775865   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:47:02.798499   17733 logs.go:274] 0 containers: []
	W1107 09:47:02.798512   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:47:02.798596   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:47:02.823472   17733 logs.go:274] 0 containers: []
	W1107 09:47:02.823483   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:47:02.823564   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:47:02.845505   17733 logs.go:274] 0 containers: []
	W1107 09:47:02.845518   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:47:02.845596   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:47:02.869188   17733 logs.go:274] 0 containers: []
	W1107 09:47:02.869201   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:47:02.869285   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:47:02.891353   17733 logs.go:274] 0 containers: []
	W1107 09:47:02.891365   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:47:02.891448   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:47:02.914320   17733 logs.go:274] 0 containers: []
	W1107 09:47:02.914331   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:47:02.914338   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:47:02.914346   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:47:02.952692   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:47:02.952705   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:47:02.964585   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:47:02.964597   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:47:03.019342   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:47:03.019362   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:47:03.019369   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:47:03.033955   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:47:03.033968   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:47:05.081985   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047941874s)
	I1107 09:47:07.582709   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:47:07.727364   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:47:07.751224   17733 logs.go:274] 0 containers: []
	W1107 09:47:07.751236   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:47:07.751322   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:47:07.773351   17733 logs.go:274] 0 containers: []
	W1107 09:47:07.773363   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:47:07.773450   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:47:07.797631   17733 logs.go:274] 0 containers: []
	W1107 09:47:07.797643   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:47:07.797721   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:47:07.821721   17733 logs.go:274] 0 containers: []
	W1107 09:47:07.821734   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:47:07.821806   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:47:07.844372   17733 logs.go:274] 0 containers: []
	W1107 09:47:07.844384   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:47:07.844462   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:47:07.866079   17733 logs.go:274] 0 containers: []
	W1107 09:47:07.866091   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:47:07.866177   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:47:07.888992   17733 logs.go:274] 0 containers: []
	W1107 09:47:07.889003   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:47:07.889081   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:47:07.910255   17733 logs.go:274] 0 containers: []
	W1107 09:47:07.910265   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:47:07.910272   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:47:07.910278   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:47:07.921878   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:47:07.921891   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:47:07.975389   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:47:07.975408   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:47:07.975414   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:47:07.988838   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:47:07.988849   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:47:10.033303   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044379575s)
	I1107 09:47:10.033412   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:47:10.033418   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:47:12.570662   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:47:12.727864   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:47:12.752270   17733 logs.go:274] 0 containers: []
	W1107 09:47:12.752281   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:47:12.752366   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:47:12.779158   17733 logs.go:274] 0 containers: []
	W1107 09:47:12.779169   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:47:12.779252   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:47:12.803728   17733 logs.go:274] 0 containers: []
	W1107 09:47:12.803742   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:47:12.803834   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:47:12.828638   17733 logs.go:274] 0 containers: []
	W1107 09:47:12.828649   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:47:12.828755   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:47:12.851144   17733 logs.go:274] 0 containers: []
	W1107 09:47:12.851156   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:47:12.851236   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:47:12.873327   17733 logs.go:274] 0 containers: []
	W1107 09:47:12.873338   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:47:12.873421   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:47:12.896106   17733 logs.go:274] 0 containers: []
	W1107 09:47:12.896119   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:47:12.896202   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:47:12.918925   17733 logs.go:274] 0 containers: []
	W1107 09:47:12.918939   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:47:12.918946   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:47:12.918953   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:47:12.957479   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:47:12.957493   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:47:12.968892   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:47:12.968905   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:47:13.022511   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:47:13.022521   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:47:13.022527   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:47:13.036347   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:47:13.036358   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:47:15.082498   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046066432s)
	I1107 09:47:17.583103   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:47:17.729663   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:47:17.755974   17733 logs.go:274] 0 containers: []
	W1107 09:47:17.755986   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:47:17.756065   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:47:17.780215   17733 logs.go:274] 0 containers: []
	W1107 09:47:17.780230   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:47:17.780320   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:47:17.805762   17733 logs.go:274] 0 containers: []
	W1107 09:47:17.805773   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:47:17.805851   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:47:17.828507   17733 logs.go:274] 0 containers: []
	W1107 09:47:17.828518   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:47:17.828599   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:47:17.850789   17733 logs.go:274] 0 containers: []
	W1107 09:47:17.850801   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:47:17.850880   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:47:17.872791   17733 logs.go:274] 0 containers: []
	W1107 09:47:17.872803   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:47:17.872886   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:47:17.895567   17733 logs.go:274] 0 containers: []
	W1107 09:47:17.895578   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:47:17.895661   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:47:17.917960   17733 logs.go:274] 0 containers: []
	W1107 09:47:17.917971   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:47:17.917978   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:47:17.917986   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:47:17.956938   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:47:17.956952   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:47:17.968813   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:47:17.968826   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:47:18.023827   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:47:18.023839   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:47:18.023846   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:47:18.038690   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:47:18.038703   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:47:20.084574   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04579755s)
	I1107 09:47:22.584906   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:47:22.729865   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:47:22.754806   17733 logs.go:274] 0 containers: []
	W1107 09:47:22.754823   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:47:22.754920   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:47:22.780192   17733 logs.go:274] 0 containers: []
	W1107 09:47:22.780204   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:47:22.780287   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:47:22.804020   17733 logs.go:274] 0 containers: []
	W1107 09:47:22.804032   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:47:22.804117   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:47:22.826911   17733 logs.go:274] 0 containers: []
	W1107 09:47:22.826925   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:47:22.827005   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:47:22.849439   17733 logs.go:274] 0 containers: []
	W1107 09:47:22.849450   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:47:22.849531   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:47:22.871462   17733 logs.go:274] 0 containers: []
	W1107 09:47:22.871475   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:47:22.871557   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:47:22.894588   17733 logs.go:274] 0 containers: []
	W1107 09:47:22.894604   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:47:22.894684   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:47:22.916942   17733 logs.go:274] 0 containers: []
	W1107 09:47:22.916954   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:47:22.916964   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:47:22.916971   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:47:22.930887   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:47:22.930899   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:47:24.980560   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04958773s)
	I1107 09:47:24.980671   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:47:24.980680   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:47:25.018246   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:47:25.018259   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:47:25.029724   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:47:25.029736   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:47:25.083589   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:47:27.585965   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:47:27.728545   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:47:27.752532   17733 logs.go:274] 0 containers: []
	W1107 09:47:27.752545   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:47:27.752638   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:47:27.775724   17733 logs.go:274] 0 containers: []
	W1107 09:47:27.775737   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:47:27.775824   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:47:27.798941   17733 logs.go:274] 0 containers: []
	W1107 09:47:27.798954   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:47:27.799035   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:47:27.823458   17733 logs.go:274] 0 containers: []
	W1107 09:47:27.823473   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:47:27.823554   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:47:27.845444   17733 logs.go:274] 0 containers: []
	W1107 09:47:27.845454   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:47:27.845535   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:47:27.867422   17733 logs.go:274] 0 containers: []
	W1107 09:47:27.867434   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:47:27.867517   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:47:27.890229   17733 logs.go:274] 0 containers: []
	W1107 09:47:27.890240   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:47:27.890319   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:47:27.912623   17733 logs.go:274] 0 containers: []
	W1107 09:47:27.912636   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:47:27.912647   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:47:27.912654   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:47:27.950943   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:47:27.950956   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:47:27.963443   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:47:27.963457   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:47:28.016669   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:47:28.016679   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:47:28.016685   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:47:28.030826   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:47:28.030839   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:47:30.075866   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044952832s)
	I1107 09:47:32.578205   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:47:32.728064   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:47:32.752577   17733 logs.go:274] 0 containers: []
	W1107 09:47:32.752591   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:47:32.752687   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:47:32.776889   17733 logs.go:274] 0 containers: []
	W1107 09:47:32.776903   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:47:32.776993   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:47:32.799121   17733 logs.go:274] 0 containers: []
	W1107 09:47:32.799134   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:47:32.799216   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:47:32.823308   17733 logs.go:274] 0 containers: []
	W1107 09:47:32.823321   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:47:32.823403   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:47:32.844568   17733 logs.go:274] 0 containers: []
	W1107 09:47:32.844579   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:47:32.844662   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:47:32.867041   17733 logs.go:274] 0 containers: []
	W1107 09:47:32.867053   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:47:32.867138   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:47:32.888377   17733 logs.go:274] 0 containers: []
	W1107 09:47:32.888390   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:47:32.888471   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:47:32.910251   17733 logs.go:274] 0 containers: []
	W1107 09:47:32.910264   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:47:32.910271   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:47:32.910278   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:47:32.947875   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:47:32.947889   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:47:32.959352   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:47:32.959365   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:47:33.015284   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:47:33.015298   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:47:33.015306   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:47:33.029372   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:47:33.029387   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:47:35.078139   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048676719s)
	I1107 09:47:37.579267   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:47:37.728299   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:47:37.754299   17733 logs.go:274] 0 containers: []
	W1107 09:47:37.754311   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:47:37.754397   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:47:37.777666   17733 logs.go:274] 0 containers: []
	W1107 09:47:37.777678   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:47:37.777759   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:47:37.799432   17733 logs.go:274] 0 containers: []
	W1107 09:47:37.799452   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:47:37.799536   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:47:37.824654   17733 logs.go:274] 0 containers: []
	W1107 09:47:37.824665   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:47:37.824751   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:47:37.847498   17733 logs.go:274] 0 containers: []
	W1107 09:47:37.847509   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:47:37.847587   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:47:37.870134   17733 logs.go:274] 0 containers: []
	W1107 09:47:37.870147   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:47:37.870229   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:47:37.892437   17733 logs.go:274] 0 containers: []
	W1107 09:47:37.892448   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:47:37.892530   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:47:37.914377   17733 logs.go:274] 0 containers: []
	W1107 09:47:37.914394   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:47:37.914402   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:47:37.914410   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:47:37.954325   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:47:37.954370   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:47:37.966344   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:47:37.966357   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:47:38.020068   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:47:38.020078   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:47:38.020084   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:47:38.034252   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:47:38.034266   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:47:40.081466   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047126135s)
	I1107 09:47:42.583900   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:47:42.728489   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:47:42.756538   17733 logs.go:274] 0 containers: []
	W1107 09:47:42.756549   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:47:42.756638   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:47:42.778717   17733 logs.go:274] 0 containers: []
	W1107 09:47:42.778729   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:47:42.778813   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:47:42.802536   17733 logs.go:274] 0 containers: []
	W1107 09:47:42.802548   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:47:42.802636   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:47:42.826326   17733 logs.go:274] 0 containers: []
	W1107 09:47:42.826338   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:47:42.826423   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:47:42.849069   17733 logs.go:274] 0 containers: []
	W1107 09:47:42.849080   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:47:42.849161   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:47:42.872360   17733 logs.go:274] 0 containers: []
	W1107 09:47:42.872372   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:47:42.872454   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:47:42.895332   17733 logs.go:274] 0 containers: []
	W1107 09:47:42.895345   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:47:42.895431   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:47:42.921193   17733 logs.go:274] 0 containers: []
	W1107 09:47:42.921206   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:47:42.921213   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:47:42.921220   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:47:42.958814   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:47:42.958828   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:47:42.970943   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:47:42.970955   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:47:43.024035   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:47:43.024045   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:47:43.024051   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:47:43.038160   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:47:43.038171   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:47:45.084507   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046259685s)
	I1107 09:47:47.586317   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:47:47.730549   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:47:47.754792   17733 logs.go:274] 0 containers: []
	W1107 09:47:47.754804   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:47:47.754886   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:47:47.777120   17733 logs.go:274] 0 containers: []
	W1107 09:47:47.777131   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:47:47.777222   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:47:47.799419   17733 logs.go:274] 0 containers: []
	W1107 09:47:47.799431   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:47:47.799510   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:47:47.824877   17733 logs.go:274] 0 containers: []
	W1107 09:47:47.824888   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:47:47.824974   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:47:47.846433   17733 logs.go:274] 0 containers: []
	W1107 09:47:47.846444   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:47:47.846533   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:47:47.868582   17733 logs.go:274] 0 containers: []
	W1107 09:47:47.868593   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:47:47.868680   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:47:47.892146   17733 logs.go:274] 0 containers: []
	W1107 09:47:47.892158   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:47:47.892243   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:47:47.914765   17733 logs.go:274] 0 containers: []
	W1107 09:47:47.914785   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:47:47.914792   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:47:47.914798   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:47:47.952330   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:47:47.952345   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:47:47.965560   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:47:47.965574   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:47:48.023948   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:47:48.023958   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:47:48.023965   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:47:48.040320   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:47:48.040332   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:47:50.085711   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045304545s)
	I1107 09:47:52.586342   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:47:52.728540   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:47:52.755298   17733 logs.go:274] 0 containers: []
	W1107 09:47:52.755309   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:47:52.755394   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:47:52.780460   17733 logs.go:274] 0 containers: []
	W1107 09:47:52.780472   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:47:52.780552   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:47:52.803728   17733 logs.go:274] 0 containers: []
	W1107 09:47:52.803743   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:47:52.803841   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:47:52.831885   17733 logs.go:274] 0 containers: []
	W1107 09:47:52.831900   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:47:52.831987   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:47:52.855090   17733 logs.go:274] 0 containers: []
	W1107 09:47:52.855102   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:47:52.855184   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:47:52.885297   17733 logs.go:274] 0 containers: []
	W1107 09:47:52.885309   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:47:52.885408   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:47:52.909273   17733 logs.go:274] 0 containers: []
	W1107 09:47:52.909286   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:47:52.909369   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:47:52.934319   17733 logs.go:274] 0 containers: []
	W1107 09:47:52.934331   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:47:52.934338   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:47:52.934346   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:47:52.994190   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:47:52.994202   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:47:52.994211   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:47:53.011347   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:47:53.011361   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:47:55.059822   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048388007s)
	I1107 09:47:55.059935   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:47:55.059941   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:47:55.099193   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:47:55.099215   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:47:57.612869   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:47:57.728835   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:47:57.754041   17733 logs.go:274] 0 containers: []
	W1107 09:47:57.754053   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:47:57.754141   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:47:57.777897   17733 logs.go:274] 0 containers: []
	W1107 09:47:57.777908   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:47:57.777992   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:47:57.814000   17733 logs.go:274] 0 containers: []
	W1107 09:47:57.814013   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:47:57.814103   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:47:57.837873   17733 logs.go:274] 0 containers: []
	W1107 09:47:57.837885   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:47:57.837966   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:47:57.861516   17733 logs.go:274] 0 containers: []
	W1107 09:47:57.861529   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:47:57.861608   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:47:57.883884   17733 logs.go:274] 0 containers: []
	W1107 09:47:57.883896   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:47:57.883981   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:47:57.906178   17733 logs.go:274] 0 containers: []
	W1107 09:47:57.906189   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:47:57.906275   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:47:57.927696   17733 logs.go:274] 0 containers: []
	W1107 09:47:57.927707   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:47:57.927714   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:47:57.927721   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:47:59.979015   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051218535s)
	I1107 09:47:59.979132   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:47:59.979140   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:48:00.025947   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:48:00.025969   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:48:00.040266   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:48:00.040282   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:48:00.102452   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:48:00.102523   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:48:00.102535   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:48:02.618642   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:48:02.731004   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:48:02.755185   17733 logs.go:274] 0 containers: []
	W1107 09:48:02.755202   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:48:02.755297   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:48:02.778261   17733 logs.go:274] 0 containers: []
	W1107 09:48:02.778273   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:48:02.778356   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:48:02.799458   17733 logs.go:274] 0 containers: []
	W1107 09:48:02.799472   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:48:02.799557   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:48:02.822375   17733 logs.go:274] 0 containers: []
	W1107 09:48:02.822411   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:48:02.822502   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:48:02.845921   17733 logs.go:274] 0 containers: []
	W1107 09:48:02.845933   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:48:02.846011   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:48:02.871601   17733 logs.go:274] 0 containers: []
	W1107 09:48:02.871612   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:48:02.871693   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:48:02.894098   17733 logs.go:274] 0 containers: []
	W1107 09:48:02.894117   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:48:02.894210   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:48:02.915890   17733 logs.go:274] 0 containers: []
	W1107 09:48:02.915903   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:48:02.915910   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:48:02.915918   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:48:02.955044   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:48:02.955059   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:48:02.967244   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:48:02.967258   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:48:03.024079   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:48:03.024094   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:48:03.024100   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:48:03.037947   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:48:03.037960   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:48:05.090846   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052807747s)
	I1107 09:48:07.591820   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:48:07.729108   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:48:07.752791   17733 logs.go:274] 0 containers: []
	W1107 09:48:07.752806   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:48:07.752902   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:48:07.776651   17733 logs.go:274] 0 containers: []
	W1107 09:48:07.776663   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:48:07.776747   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:48:07.798354   17733 logs.go:274] 0 containers: []
	W1107 09:48:07.798366   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:48:07.798448   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:48:07.820741   17733 logs.go:274] 0 containers: []
	W1107 09:48:07.820752   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:48:07.820835   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:48:07.842308   17733 logs.go:274] 0 containers: []
	W1107 09:48:07.842320   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:48:07.842400   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:48:07.865796   17733 logs.go:274] 0 containers: []
	W1107 09:48:07.865808   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:48:07.865886   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:48:07.889033   17733 logs.go:274] 0 containers: []
	W1107 09:48:07.889046   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:48:07.889128   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:48:07.911525   17733 logs.go:274] 0 containers: []
	W1107 09:48:07.911538   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:48:07.911548   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:48:07.911556   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:48:07.925972   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:48:07.925984   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:48:09.974537   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048480686s)
	I1107 09:48:09.974645   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:48:09.974652   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:48:10.012844   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:48:10.012857   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:48:10.025069   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:48:10.025082   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:48:10.083312   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:48:12.583570   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:48:12.729902   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:48:12.757892   17733 logs.go:274] 0 containers: []
	W1107 09:48:12.757904   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:48:12.757998   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:48:12.782954   17733 logs.go:274] 0 containers: []
	W1107 09:48:12.782965   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:48:12.783044   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:48:12.806666   17733 logs.go:274] 0 containers: []
	W1107 09:48:12.806679   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:48:12.806760   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:48:12.831937   17733 logs.go:274] 0 containers: []
	W1107 09:48:12.831949   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:48:12.832032   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:48:12.855735   17733 logs.go:274] 0 containers: []
	W1107 09:48:12.855748   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:48:12.855849   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:48:12.878109   17733 logs.go:274] 0 containers: []
	W1107 09:48:12.878121   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:48:12.878204   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:48:12.902530   17733 logs.go:274] 0 containers: []
	W1107 09:48:12.902542   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:48:12.902623   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:48:12.924197   17733 logs.go:274] 0 containers: []
	W1107 09:48:12.924210   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:48:12.924218   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:48:12.924226   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:48:12.962644   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:48:12.962657   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:48:12.974242   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:48:12.974255   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:48:13.027334   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:48:13.027350   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:48:13.027358   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:48:13.042204   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:48:13.042218   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:48:15.088887   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046596253s)
	I1107 09:48:17.589867   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:48:17.731798   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:48:17.765337   17733 logs.go:274] 0 containers: []
	W1107 09:48:17.765364   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:48:17.765460   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:48:17.787359   17733 logs.go:274] 0 containers: []
	W1107 09:48:17.787370   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:48:17.787452   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:48:17.809324   17733 logs.go:274] 0 containers: []
	W1107 09:48:17.809337   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:48:17.809421   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:48:17.833800   17733 logs.go:274] 0 containers: []
	W1107 09:48:17.833818   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:48:17.833909   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:48:17.863769   17733 logs.go:274] 0 containers: []
	W1107 09:48:17.863781   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:48:17.863882   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:48:17.888841   17733 logs.go:274] 0 containers: []
	W1107 09:48:17.888854   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:48:17.888936   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:48:17.911602   17733 logs.go:274] 0 containers: []
	W1107 09:48:17.911615   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:48:17.911694   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:48:17.953558   17733 logs.go:274] 0 containers: []
	W1107 09:48:17.953573   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:48:17.953582   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:48:17.953592   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:48:18.002584   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:48:18.002602   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:48:18.015296   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:48:18.015311   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:48:18.099465   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:48:18.099477   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:48:18.099483   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:48:18.113401   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:48:18.113413   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:48:20.163779   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04959693s)
	I1107 09:48:22.664971   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:48:22.733035   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:48:22.760892   17733 logs.go:274] 0 containers: []
	W1107 09:48:22.760903   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:48:22.760986   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:48:22.784639   17733 logs.go:274] 0 containers: []
	W1107 09:48:22.784652   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:48:22.784737   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:48:22.808504   17733 logs.go:274] 0 containers: []
	W1107 09:48:22.808515   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:48:22.808599   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:48:22.831662   17733 logs.go:274] 0 containers: []
	W1107 09:48:22.831676   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:48:22.831762   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:48:22.862638   17733 logs.go:274] 0 containers: []
	W1107 09:48:22.862662   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:48:22.862761   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:48:22.888455   17733 logs.go:274] 0 containers: []
	W1107 09:48:22.888478   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:48:22.888568   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:48:22.913899   17733 logs.go:274] 0 containers: []
	W1107 09:48:22.913913   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:48:22.913994   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:48:22.937659   17733 logs.go:274] 0 containers: []
	W1107 09:48:22.937671   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:48:22.937680   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:48:22.937709   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:48:22.953411   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:48:22.953425   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:48:25.019616   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065601237s)
	I1107 09:48:25.019738   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:48:25.019748   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:48:25.068675   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:48:25.068729   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:48:25.082234   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:48:25.082253   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:48:25.151279   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:48:27.652034   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:48:27.733089   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:48:27.759692   17733 logs.go:274] 0 containers: []
	W1107 09:48:27.759711   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:48:27.759813   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:48:27.785721   17733 logs.go:274] 0 containers: []
	W1107 09:48:27.785737   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:48:27.785848   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:48:27.809977   17733 logs.go:274] 0 containers: []
	W1107 09:48:27.809992   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:48:27.810094   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:48:27.832804   17733 logs.go:274] 0 containers: []
	W1107 09:48:27.832816   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:48:27.832941   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:48:27.857479   17733 logs.go:274] 0 containers: []
	W1107 09:48:27.857491   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:48:27.857584   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:48:27.887620   17733 logs.go:274] 0 containers: []
	W1107 09:48:27.887635   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:48:27.887731   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:48:27.912384   17733 logs.go:274] 0 containers: []
	W1107 09:48:27.912397   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:48:27.912515   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:48:27.940244   17733 logs.go:274] 0 containers: []
	W1107 09:48:27.940256   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:48:27.940264   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:48:27.940275   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:48:28.001411   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:48:28.001422   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:48:28.001429   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:48:28.017588   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:48:28.017601   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:48:30.065392   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047349188s)
	I1107 09:48:30.065507   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:48:30.065514   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:48:30.103567   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:48:30.103582   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:48:32.616726   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:48:32.733901   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:48:32.760188   17733 logs.go:274] 0 containers: []
	W1107 09:48:32.760201   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:48:32.760291   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:48:32.791066   17733 logs.go:274] 0 containers: []
	W1107 09:48:32.791079   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:48:32.791164   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:48:32.818827   17733 logs.go:274] 0 containers: []
	W1107 09:48:32.818840   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:48:32.818926   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:48:32.848749   17733 logs.go:274] 0 containers: []
	W1107 09:48:32.848768   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:48:32.848866   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:48:32.879611   17733 logs.go:274] 0 containers: []
	W1107 09:48:32.879628   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:48:32.879741   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:48:32.909534   17733 logs.go:274] 0 containers: []
	W1107 09:48:32.909549   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:48:32.909653   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:48:32.938352   17733 logs.go:274] 0 containers: []
	W1107 09:48:32.938364   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:48:32.938459   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:48:32.966760   17733 logs.go:274] 0 containers: []
	W1107 09:48:32.966774   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:48:32.966782   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:48:32.966789   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:48:33.014414   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:48:33.014434   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:48:33.034701   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:48:33.034716   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:48:33.118986   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:48:33.118999   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:48:33.119008   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:48:33.140053   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:48:33.140067   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:48:35.199354   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058947574s)
	I1107 09:48:37.699978   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:48:37.735053   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:48:37.758896   17733 logs.go:274] 0 containers: []
	W1107 09:48:37.758908   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:48:37.758991   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:48:37.783090   17733 logs.go:274] 0 containers: []
	W1107 09:48:37.783102   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:48:37.783188   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:48:37.805654   17733 logs.go:274] 0 containers: []
	W1107 09:48:37.805673   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:48:37.805756   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:48:37.827217   17733 logs.go:274] 0 containers: []
	W1107 09:48:37.827228   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:48:37.827311   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:48:37.849486   17733 logs.go:274] 0 containers: []
	W1107 09:48:37.849499   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:48:37.849581   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:48:37.872414   17733 logs.go:274] 0 containers: []
	W1107 09:48:37.872428   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:48:37.872509   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:48:37.896554   17733 logs.go:274] 0 containers: []
	W1107 09:48:37.896567   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:48:37.896665   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:48:37.919098   17733 logs.go:274] 0 containers: []
	W1107 09:48:37.919109   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:48:37.919116   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:48:37.919123   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:48:37.979421   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:48:37.979431   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:48:37.979438   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:48:37.993331   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:48:37.993344   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:48:40.038991   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045381496s)
	I1107 09:48:40.039101   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:48:40.039108   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:48:40.078206   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:48:40.078220   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:48:42.593774   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:48:42.736677   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:48:42.763000   17733 logs.go:274] 0 containers: []
	W1107 09:48:42.763012   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:48:42.763108   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:48:42.787205   17733 logs.go:274] 0 containers: []
	W1107 09:48:42.787216   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:48:42.787301   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:48:42.811513   17733 logs.go:274] 0 containers: []
	W1107 09:48:42.811525   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:48:42.811605   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:48:42.835529   17733 logs.go:274] 0 containers: []
	W1107 09:48:42.835543   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:48:42.835666   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:48:42.862394   17733 logs.go:274] 0 containers: []
	W1107 09:48:42.862409   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:48:42.862477   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:48:42.887011   17733 logs.go:274] 0 containers: []
	W1107 09:48:42.887022   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:48:42.887110   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:48:42.910542   17733 logs.go:274] 0 containers: []
	W1107 09:48:42.910554   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:48:42.910647   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:48:42.938087   17733 logs.go:274] 0 containers: []
	W1107 09:48:42.938099   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:48:42.938106   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:48:42.938128   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:48:42.980723   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:48:42.980744   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:48:42.999483   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:48:42.999499   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:48:43.055226   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:48:43.055236   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:48:43.055242   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:48:43.068706   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:48:43.068720   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:48:45.115627   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046693175s)
	I1107 09:48:47.617162   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:48:47.735764   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:48:47.760193   17733 logs.go:274] 0 containers: []
	W1107 09:48:47.760209   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:48:47.760291   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:48:47.783696   17733 logs.go:274] 0 containers: []
	W1107 09:48:47.783709   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:48:47.783797   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:48:47.811055   17733 logs.go:274] 0 containers: []
	W1107 09:48:47.811066   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:48:47.811147   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:48:47.835566   17733 logs.go:274] 0 containers: []
	W1107 09:48:47.835579   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:48:47.835703   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:48:47.862646   17733 logs.go:274] 0 containers: []
	W1107 09:48:47.862659   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:48:47.862730   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:48:47.886294   17733 logs.go:274] 0 containers: []
	W1107 09:48:47.886306   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:48:47.886380   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:48:47.912544   17733 logs.go:274] 0 containers: []
	W1107 09:48:47.912563   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:48:47.912663   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:48:47.938756   17733 logs.go:274] 0 containers: []
	W1107 09:48:47.938767   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:48:47.938775   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:48:47.938785   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:48:47.981671   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:48:47.981688   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:48:48.024767   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:48:48.024779   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:48:48.081329   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:48:48.081344   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:48:48.081351   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:48:48.096017   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:48:48.096033   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:48:50.142795   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046588413s)
	I1107 09:48:52.645213   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:48:52.735982   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:48:52.768747   17733 logs.go:274] 0 containers: []
	W1107 09:48:52.768765   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:48:52.768920   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:48:52.810616   17733 logs.go:274] 0 containers: []
	W1107 09:48:52.810633   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:48:52.810771   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:48:52.840260   17733 logs.go:274] 0 containers: []
	W1107 09:48:52.840273   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:48:52.840372   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:48:52.870496   17733 logs.go:274] 0 containers: []
	W1107 09:48:52.870508   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:48:52.870593   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:48:52.903493   17733 logs.go:274] 0 containers: []
	W1107 09:48:52.903505   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:48:52.903586   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:48:52.933211   17733 logs.go:274] 0 containers: []
	W1107 09:48:52.933224   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:48:52.933419   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:48:52.962443   17733 logs.go:274] 0 containers: []
	W1107 09:48:52.962456   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:48:52.962562   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:48:52.997613   17733 logs.go:274] 0 containers: []
	W1107 09:48:52.997626   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:48:52.997635   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:48:52.997644   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:48:53.040441   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:48:53.040462   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:48:53.056226   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:48:53.056241   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:48:53.124253   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:48:53.124270   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:48:53.124278   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:48:53.141538   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:48:53.141554   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:48:55.189349   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047649812s)
	I1107 09:48:57.691792   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:48:57.738024   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:48:57.760675   17733 logs.go:274] 0 containers: []
	W1107 09:48:57.760686   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:48:57.760776   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:48:57.783441   17733 logs.go:274] 0 containers: []
	W1107 09:48:57.783452   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:48:57.783541   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:48:57.807826   17733 logs.go:274] 0 containers: []
	W1107 09:48:57.807839   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:48:57.807916   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:48:57.831709   17733 logs.go:274] 0 containers: []
	W1107 09:48:57.831722   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:48:57.831806   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:48:57.854734   17733 logs.go:274] 0 containers: []
	W1107 09:48:57.854746   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:48:57.854831   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:48:57.879303   17733 logs.go:274] 0 containers: []
	W1107 09:48:57.879316   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:48:57.879400   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:48:57.901892   17733 logs.go:274] 0 containers: []
	W1107 09:48:57.901922   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:48:57.902069   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:48:57.925703   17733 logs.go:274] 0 containers: []
	W1107 09:48:57.925718   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:48:57.925725   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:48:57.925734   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:48:57.968896   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:48:57.968911   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:48:57.983948   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:48:57.983971   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:48:58.043942   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:48:58.043955   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:48:58.043964   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:48:58.058916   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:48:58.058931   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:49:00.118245   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05918372s)
	I1107 09:49:02.619153   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:49:02.738354   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:49:02.761860   17733 logs.go:274] 0 containers: []
	W1107 09:49:02.761871   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:49:02.761954   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:49:02.784827   17733 logs.go:274] 0 containers: []
	W1107 09:49:02.784839   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:49:02.784925   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:49:02.807472   17733 logs.go:274] 0 containers: []
	W1107 09:49:02.807486   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:49:02.807574   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:49:02.831192   17733 logs.go:274] 0 containers: []
	W1107 09:49:02.831203   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:49:02.831289   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:49:02.854938   17733 logs.go:274] 0 containers: []
	W1107 09:49:02.854950   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:49:02.855039   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:49:02.884294   17733 logs.go:274] 0 containers: []
	W1107 09:49:02.884306   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:49:02.884396   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:49:02.909839   17733 logs.go:274] 0 containers: []
	W1107 09:49:02.909850   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:49:02.909941   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:49:02.935167   17733 logs.go:274] 0 containers: []
	W1107 09:49:02.935179   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:49:02.935187   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:49:02.935194   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:49:02.976345   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:49:02.976359   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:49:02.988913   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:49:02.988931   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:49:03.044965   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:49:03.045009   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:49:03.045021   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:49:03.059521   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:49:03.059536   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:49:05.108310   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048661407s)
	I1107 09:49:07.609410   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:49:07.736964   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:49:07.760717   17733 logs.go:274] 0 containers: []
	W1107 09:49:07.760728   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:49:07.760810   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:49:07.782869   17733 logs.go:274] 0 containers: []
	W1107 09:49:07.782880   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:49:07.782960   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:49:07.804587   17733 logs.go:274] 0 containers: []
	W1107 09:49:07.804603   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:49:07.804691   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:49:07.825664   17733 logs.go:274] 0 containers: []
	W1107 09:49:07.825677   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:49:07.825757   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:49:07.848606   17733 logs.go:274] 0 containers: []
	W1107 09:49:07.848617   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:49:07.848699   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:49:07.873592   17733 logs.go:274] 0 containers: []
	W1107 09:49:07.873608   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:49:07.873701   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:49:07.895981   17733 logs.go:274] 0 containers: []
	W1107 09:49:07.895992   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:49:07.896084   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:49:07.917915   17733 logs.go:274] 0 containers: []
	W1107 09:49:07.917927   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:49:07.917935   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:49:07.917943   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:49:07.958120   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:49:07.958135   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:49:07.970416   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:49:07.970429   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:49:08.022016   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:49:08.022028   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:49:08.022034   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:49:08.035640   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:49:08.035653   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:49:10.080187   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044432419s)
	I1107 09:49:12.580909   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:49:12.737041   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:49:12.759906   17733 logs.go:274] 0 containers: []
	W1107 09:49:12.759922   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:49:12.760021   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:49:12.782066   17733 logs.go:274] 0 containers: []
	W1107 09:49:12.782078   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:49:12.782160   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:49:12.804973   17733 logs.go:274] 0 containers: []
	W1107 09:49:12.804984   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:49:12.805069   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:49:12.826516   17733 logs.go:274] 0 containers: []
	W1107 09:49:12.826527   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:49:12.826609   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:49:12.849349   17733 logs.go:274] 0 containers: []
	W1107 09:49:12.849362   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:49:12.849445   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:49:12.872536   17733 logs.go:274] 0 containers: []
	W1107 09:49:12.872547   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:49:12.872626   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:49:12.895831   17733 logs.go:274] 0 containers: []
	W1107 09:49:12.895844   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:49:12.895932   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:49:12.917557   17733 logs.go:274] 0 containers: []
	W1107 09:49:12.917568   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:49:12.917575   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:49:12.917582   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:49:12.932613   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:49:12.932627   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:49:14.981755   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049035214s)
	I1107 09:49:14.981865   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:49:14.981872   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:49:15.020036   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:49:15.020049   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:49:15.031615   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:49:15.031632   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:49:15.088594   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:49:17.589281   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:49:17.737877   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:49:17.761855   17733 logs.go:274] 0 containers: []
	W1107 09:49:17.761867   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:49:17.761948   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:49:17.784125   17733 logs.go:274] 0 containers: []
	W1107 09:49:17.784136   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:49:17.784220   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:49:17.807830   17733 logs.go:274] 0 containers: []
	W1107 09:49:17.807841   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:49:17.807929   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:49:17.838519   17733 logs.go:274] 0 containers: []
	W1107 09:49:17.838530   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:49:17.838616   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:49:17.860916   17733 logs.go:274] 0 containers: []
	W1107 09:49:17.860927   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:49:17.861011   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:49:17.882906   17733 logs.go:274] 0 containers: []
	W1107 09:49:17.882919   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:49:17.883002   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:49:17.904600   17733 logs.go:274] 0 containers: []
	W1107 09:49:17.904611   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:49:17.904693   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:49:17.926758   17733 logs.go:274] 0 containers: []
	W1107 09:49:17.926770   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:49:17.926777   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:49:17.926784   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:49:17.964931   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:49:17.964947   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:49:17.976541   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:49:17.976554   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:49:18.036810   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:49:18.036831   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:49:18.036839   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:49:18.052127   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:49:18.052138   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:49:20.099016   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046789243s)
	I1107 09:49:22.600349   17733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:49:22.737298   17733 kubeadm.go:631] restartCluster took 4m4.103744712s
	W1107 09:49:22.737386   17733 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I1107 09:49:22.737403   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1107 09:49:23.154325   17733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 09:49:23.164172   17733 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 09:49:23.171984   17733 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 09:49:23.172048   17733 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 09:49:23.179950   17733 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 09:49:23.179972   17733 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 09:49:23.227299   17733 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1107 09:49:23.227339   17733 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 09:49:23.516211   17733 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 09:49:23.516297   17733 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 09:49:23.516371   17733 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 09:49:23.744172   17733 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 09:49:23.745104   17733 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 09:49:23.752877   17733 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1107 09:49:23.817062   17733 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 09:49:23.837944   17733 out.go:204]   - Generating certificates and keys ...
	I1107 09:49:23.838029   17733 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 09:49:23.838125   17733 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 09:49:23.838187   17733 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1107 09:49:23.838278   17733 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1107 09:49:23.838379   17733 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1107 09:49:23.838460   17733 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1107 09:49:23.838583   17733 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1107 09:49:23.838661   17733 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1107 09:49:23.838730   17733 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1107 09:49:23.838784   17733 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1107 09:49:23.838810   17733 kubeadm.go:317] [certs] Using the existing "sa" key
	I1107 09:49:23.838854   17733 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 09:49:23.967202   17733 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 09:49:24.215431   17733 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 09:49:24.460870   17733 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 09:49:24.710156   17733 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 09:49:24.710684   17733 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 09:49:24.732642   17733 out.go:204]   - Booting up control plane ...
	I1107 09:49:24.732809   17733 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 09:49:24.732933   17733 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 09:49:24.733028   17733 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 09:49:24.733146   17733 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 09:49:24.733412   17733 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 09:50:04.691249   17733 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1107 09:50:04.691634   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:50:04.691818   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:50:09.689998   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:50:09.690210   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:50:19.684628   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:50:19.684860   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:50:39.672865   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:50:39.673065   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:51:19.648322   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:51:19.648492   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:51:19.648503   17733 kubeadm.go:317] 
	I1107 09:51:19.648530   17733 kubeadm.go:317] Unfortunately, an error has occurred:
	I1107 09:51:19.648560   17733 kubeadm.go:317] 	timed out waiting for the condition
	I1107 09:51:19.648565   17733 kubeadm.go:317] 
	I1107 09:51:19.648598   17733 kubeadm.go:317] This error is likely caused by:
	I1107 09:51:19.648622   17733 kubeadm.go:317] 	- The kubelet is not running
	I1107 09:51:19.648697   17733 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1107 09:51:19.648709   17733 kubeadm.go:317] 
	I1107 09:51:19.648791   17733 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1107 09:51:19.648819   17733 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1107 09:51:19.648852   17733 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1107 09:51:19.648862   17733 kubeadm.go:317] 
	I1107 09:51:19.648941   17733 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1107 09:51:19.649019   17733 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1107 09:51:19.649088   17733 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1107 09:51:19.649123   17733 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1107 09:51:19.649174   17733 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1107 09:51:19.649197   17733 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1107 09:51:19.651759   17733 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1107 09:51:19.651871   17733 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1107 09:51:19.651951   17733 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 09:51:19.652038   17733 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1107 09:51:19.652123   17733 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W1107 09:51:19.652253   17733 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1107 09:51:19.652281   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1107 09:51:20.069588   17733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 09:51:20.080241   17733 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 09:51:20.080306   17733 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 09:51:20.087749   17733 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 09:51:20.087774   17733 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 09:51:20.135336   17733 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1107 09:51:20.135375   17733 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 09:51:20.437748   17733 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 09:51:20.437851   17733 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 09:51:20.437936   17733 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 09:51:20.653891   17733 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 09:51:20.654964   17733 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 09:51:20.661489   17733 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1107 09:51:20.734650   17733 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 09:51:20.777321   17733 out.go:204]   - Generating certificates and keys ...
	I1107 09:51:20.777393   17733 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 09:51:20.777463   17733 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 09:51:20.777558   17733 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1107 09:51:20.777667   17733 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1107 09:51:20.777794   17733 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1107 09:51:20.777876   17733 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1107 09:51:20.777939   17733 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1107 09:51:20.778025   17733 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1107 09:51:20.778112   17733 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1107 09:51:20.778173   17733 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1107 09:51:20.778243   17733 kubeadm.go:317] [certs] Using the existing "sa" key
	I1107 09:51:20.778287   17733 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 09:51:21.051028   17733 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 09:51:21.236036   17733 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 09:51:21.353889   17733 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 09:51:21.530808   17733 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 09:51:21.531461   17733 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 09:51:21.552917   17733 out.go:204]   - Booting up control plane ...
	I1107 09:51:21.553074   17733 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 09:51:21.553250   17733 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 09:51:21.553373   17733 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 09:51:21.553510   17733 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 09:51:21.553802   17733 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 09:52:01.512622   17733 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1107 09:52:01.513359   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:52:01.513737   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:52:06.512067   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:52:06.512292   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:52:16.505926   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:52:16.506105   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:52:36.492667   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:52:36.492818   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:53:16.467026   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:53:16.467241   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:53:16.467265   17733 kubeadm.go:317] 
	I1107 09:53:16.467322   17733 kubeadm.go:317] Unfortunately, an error has occurred:
	I1107 09:53:16.467373   17733 kubeadm.go:317] 	timed out waiting for the condition
	I1107 09:53:16.467379   17733 kubeadm.go:317] 
	I1107 09:53:16.467427   17733 kubeadm.go:317] This error is likely caused by:
	I1107 09:53:16.467469   17733 kubeadm.go:317] 	- The kubelet is not running
	I1107 09:53:16.467571   17733 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1107 09:53:16.467580   17733 kubeadm.go:317] 
	I1107 09:53:16.467709   17733 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1107 09:53:16.467747   17733 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1107 09:53:16.467784   17733 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1107 09:53:16.467797   17733 kubeadm.go:317] 
	I1107 09:53:16.467904   17733 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1107 09:53:16.467994   17733 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1107 09:53:16.468074   17733 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1107 09:53:16.468118   17733 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1107 09:53:16.468188   17733 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1107 09:53:16.468222   17733 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1107 09:53:16.470825   17733 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1107 09:53:16.470942   17733 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1107 09:53:16.471042   17733 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 09:53:16.471118   17733 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1107 09:53:16.471197   17733 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1107 09:53:16.471210   17733 kubeadm.go:398] StartCluster complete in 7m57.861468859s
	I1107 09:53:16.471307   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:53:16.494433   17733 logs.go:274] 0 containers: []
	W1107 09:53:16.494445   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:53:16.494525   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:53:16.517244   17733 logs.go:274] 0 containers: []
	W1107 09:53:16.517256   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:53:16.517337   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:53:16.540587   17733 logs.go:274] 0 containers: []
	W1107 09:53:16.540600   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:53:16.540681   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:53:16.563465   17733 logs.go:274] 0 containers: []
	W1107 09:53:16.563478   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:53:16.563557   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:53:16.586980   17733 logs.go:274] 0 containers: []
	W1107 09:53:16.586993   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:53:16.587074   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:53:16.609201   17733 logs.go:274] 0 containers: []
	W1107 09:53:16.609214   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:53:16.609298   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:53:16.631665   17733 logs.go:274] 0 containers: []
	W1107 09:53:16.631677   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:53:16.631760   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:53:16.653649   17733 logs.go:274] 0 containers: []
	W1107 09:53:16.653661   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:53:16.653669   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:53:16.653676   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:53:16.694973   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:53:16.694985   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:53:16.706841   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:53:16.706853   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:53:16.763729   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:53:16.763742   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:53:16.763749   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:53:16.780034   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:53:16.780049   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:53:18.829381   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049259356s)
	W1107 09:53:18.829500   17733 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1107 09:53:18.829514   17733 out.go:239] * 
	* 
	W1107 09:53:18.829654   17733 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 09:53:18.829672   17733 out.go:239] * 
	* 
	W1107 09:53:18.830335   17733 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 09:53:18.894853   17733 out.go:177] 
	W1107 09:53:18.937524   17733 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 09:53:18.959102   17733 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1107 09:53:18.959173   17733 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1107 09:53:19.021733   17733 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-093929 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
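The suggestion near the end of the log above points at a kubelet cgroup-driver mismatch. As a sketch only (these commands were not run as part of this test; the profile name, driver, Kubernetes version, and --extra-config flag are taken from this report, while the diagnostic sequence itself is an assumption), one way to check the node's Docker cgroup driver and retry with the suggested override would be:

	# sketch only -- not executed as part of this test run
	minikube -p old-k8s-version-093929 ssh -- docker info --format '{{.CgroupDriver}}'
	minikube -p old-k8s-version-093929 ssh -- sudo journalctl -u kubelet -n 50 --no-pager
	minikube start -p old-k8s-version-093929 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd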
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-093929
helpers_test.go:235: (dbg) docker inspect old-k8s-version-093929:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd",
	        "Created": "2022-11-07T17:39:36.809249754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280182,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T17:45:15.193119617Z",
	            "FinishedAt": "2022-11-07T17:45:12.325790936Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/hosts",
	        "LogPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd-json.log",
	        "Name": "/old-k8s-version-093929",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-093929:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-093929",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad-init/diff:/var/lib/docker/overlay2/8ef76795356079208b1acef7376be67a28d951b743a50dd56a60b0d456568ae9/diff:/var/lib/docker/overlay2/f9288d2baad2a30057af35c115d2ebfb4650d5d1d798a60a2334facced392980/diff:/var/lib/docker/overlay2/270f6ca71b47e51691c54d669e6e8e86c321939c053498289406eab5aa0462f5/diff:/var/lib/docker/overlay2/ebe3fe002872a87a7cc54a77192a2ea1f0efb3730f887abec35652e72f152f46/diff:/var/lib/docker/overlay2/83c9d5ae9817ab2b318ad7ba44ade4fe9c22378e15e338b8fe94c5998fbac5c4/diff:/var/lib/docker/overlay2/6426b1d4e4f369bec5066b3c17c47f9c451787be596ba417de62155901d14061/diff:/var/lib/docker/overlay2/f409955dc1056669a5ee00fa64ecfa9733f3de1a92beefeeca73cba51d930189/diff:/var/lib/docker/overlay2/3ecb7ca97b99ba70c03450a3d6d4a4452c7e9e348eec3cf89e6e8ee51aba6a8b/diff:/var/lib/docker/overlay2/9dd8fffded9665b1b7a326cb2bb3e29e3b716cdba6544940490326ddcbfe2bda/diff:/var/lib/docker/overlay2/b43aed
d977d94230f77efb53c193c1a02895ea314fcdece500155052dfeb6b29/diff:/var/lib/docker/overlay2/ba3bd8f651e3503bd8eadf3ce01b8930edaf7eb6af4044593c756be0f3c5d03a/diff:/var/lib/docker/overlay2/359c64a8e323929352da8612c231ccf0f6be76af37c8a208a9ee98c3bce5e2a1/diff:/var/lib/docker/overlay2/868ec2aea7bce1a74dcdf6c7a708b34838e8c08e795aad6e5b974d1ab15b719c/diff:/var/lib/docker/overlay2/0438a0192165f11b19940586b456c07bfa31d015147b9d008aafaacc09fbc40c/diff:/var/lib/docker/overlay2/80a13b6491a8f9f1c0f6848a375575c20f50d592cb34f21491050776a56fca61/diff:/var/lib/docker/overlay2/dd29a4d45bcf60d3684330374a82b3f3bde4245c5d49661ffdd516cd0c0af260/diff:/var/lib/docker/overlay2/ef8c6936e45d238f2880da0d94945cb610fba8a9e38cdfb3ae6674a82a8f0480/diff:/var/lib/docker/overlay2/9934f45b2cecf953b6f56ee634f63c3dd99c8c358b74fee64fdc62cef64f7723/diff:/var/lib/docker/overlay2/f5ccdcf1811b84ddfcc2efdc07e5feefa2803c1fe476b6653b0a6af55c2e684f/diff:/var/lib/docker/overlay2/2b3b062a0d083aedf009b6c8dde21debe0396b301936ec1950364a1d0ef86b6d/diff:/var/lib/d
ocker/overlay2/db91c57bd6754e3dbdc6c234df413d494606d408e284454bf7ab30cd23f9e840/diff:/var/lib/docker/overlay2/6538f86ce38383e3a133480b44c25afa8b31a61935d6f87270e2cc139e424425/diff:/var/lib/docker/overlay2/80972648e2aa65675fe7f3de22feae57951c0092d5f963f2430650b071940bba/diff:/var/lib/docker/overlay2/19dc0f28f2a85362d2b586f65ab00efa8a97868656af9dc5911259dd3ca649ac/diff:/var/lib/docker/overlay2/99eff050eadab512f36f80d63e8b57d9aa45ef607d723d7ac3f20ece8310a758/diff:/var/lib/docker/overlay2/d6309ab08fa5212992e2b5125645ad32bce2940b50c5e8a5b72e7c7531eb80b4/diff:/var/lib/docker/overlay2/c4d3d6d4212753e50a5f68577281382a30773fb33ca98730aebdfd86d48f612c/diff:/var/lib/docker/overlay2/4292068e16912b59305479ae020d9aa923d57157c4a28dd11e69102be9c1541a/diff:/var/lib/docker/overlay2/2274c567eadc1a99c8173258b3794df0df44fd1abac0aaae2100133ad15b3f30/diff:/var/lib/docker/overlay2/e3bb447cc7563c5af39c4076a93bb7b33bd1a7c6c5ccef7fea2a6a99deddf9f3/diff:/var/lib/docker/overlay2/4329b8a4d7648d8e3bb46a144b9939a5026fa69e5ac188a778cf6ede21a
9627e/diff:/var/lib/docker/overlay2/b600639ff99f881a9eb993fd36e2faf1c0f88a869675ab9d8ec116efc2642784/diff:/var/lib/docker/overlay2/da083fbec4f2fa2681bbaaaa559fdcc46ec2a520e7b9ced39197e805a661fda3/diff:/var/lib/docker/overlay2/63848d00284d16d750a7e746c8be62f8c15819bc2fcb72297788f3c9647257e6/diff:/var/lib/docker/overlay2/3fd667008c6a5c1c5828bb4e003fc21c477a31c4d59b5b675a3886d8a7cb782d/diff:/var/lib/docker/overlay2/6b125cd950aed912fcc597ce8a96bbb5af3dbba111d6eb683ea981387e02e99d/diff:/var/lib/docker/overlay2/b4c672faa14a55ba585c6063024785d7913afc546dd6d04975591d2e13d7b52f/diff:/var/lib/docker/overlay2/c2c0287a05145a26d3313d4e33799ea96103a20115734a66a3c2af8fe728b170/diff:/var/lib/docker/overlay2/dba7b9788bd657997c8cee3b3ef21f9bc4ade7b5a0da25526255047311da571d/diff:/var/lib/docker/overlay2/1f3ae87b3ce804fde9f857de6cb225d5afa00aa39260d197d77f67e840e2d285/diff:/var/lib/docker/overlay2/603b72832425bade21ef2d76583dbe61a46ff7fbe7277673cbc6cd52cf7613dd/diff:/var/lib/docker/overlay2/a47793b1e0564c094c05134af06d2d46a6bcb7
6089b3836b831863ef51c21684/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-093929",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-093929/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-093929",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-093929",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-093929",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "78b2525896e2425ce76d453e029d9934dbd9eef1f99a4534a067bd3dedbbaf31",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53969"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53970"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53971"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53972"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53973"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/78b2525896e2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-093929": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "50811d3bbfa7",
	                        "old-k8s-version-093929"
	                    ],
	                    "NetworkID": "85b1c6253454469ed38e54ce96d4baef11c9b5b3afd90032e806121a14971f03",
	                    "EndpointID": "0eb940817c54781bdb3cdfa6365fbd23635a65c83ea00240310b1565886e76f0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
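Rather than reading the full inspect dump above, a minimal sketch (not part of the recorded run; the container name is taken from this report) of pulling only the container state and published ports with Docker's Go-template formatter:

	# sketch only -- not executed as part of this test run
	docker inspect -f '{{.State.Status}} started={{.State.StartedAt}}' old-k8s-version-093929
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-093929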
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-093929 -n old-k8s-version-093929
E1107 09:53:19.312323    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-093929 -n old-k8s-version-093929: exit status 2 (393.780996ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
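The host reports "Running" even though the start failed on the kubelet; as a sketch (not part of the recorded run), the same --format flag used above can be given a template that reports the individual components for this profile:

	# sketch only -- not executed as part of this test run
	out/minikube-darwin-amd64 status -p old-k8s-version-093929 --format='{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'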
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-093929 logs -n 25
E1107 09:53:21.672327    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-093929 logs -n 25: (3.566156212s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p enable-default-cni-092103                      | enable-default-cni-092103 | jenkins | v1.28.0 | 07 Nov 22 09:39 PST | 07 Nov 22 09:39 PST |
	| start   | -p kubenet-092103                                 | kubenet-092103            | jenkins | v1.28.0 | 07 Nov 22 09:39 PST | 07 Nov 22 09:40 PST |
	|         | --memory=2048                                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                                 |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                           |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                           |         |         |                     |                     |
	|         | --driver=docker                                   |                           |         |         |                     |                     |
	| delete  | -p calico-092105                                  | calico-092105             | jenkins | v1.28.0 | 07 Nov 22 09:39 PST | 07 Nov 22 09:39 PST |
	| start   | -p old-k8s-version-093929                         | old-k8s-version-093929    | jenkins | v1.28.0 | 07 Nov 22 09:39 PST |                     |
	|         | --memory=2200                                     |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                           |         |         |                     |                     |
	|         | --kvm-network=default                             |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                           |         |         |                     |                     |
	|         | --keep-context=false                              |                           |         |         |                     |                     |
	|         | --driver=docker                                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                           |         |         |                     |                     |
	| ssh     | -p kubenet-092103 pgrep -a                        | kubenet-092103            | jenkins | v1.28.0 | 07 Nov 22 09:40 PST | 07 Nov 22 09:40 PST |
	|         | kubelet                                           |                           |         |         |                     |                     |
	| delete  | -p kubenet-092103                                 | kubenet-092103            | jenkins | v1.28.0 | 07 Nov 22 09:41 PST | 07 Nov 22 09:41 PST |
	| start   | -p no-preload-094130                              | no-preload-094130         | jenkins | v1.28.0 | 07 Nov 22 09:41 PST | 07 Nov 22 09:42 PST |
	|         | --memory=2200                                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                                 |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                           |         |         |                     |                     |
	|         | --driver=docker                                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-094130        | no-preload-094130         | jenkins | v1.28.0 | 07 Nov 22 09:43 PST | 07 Nov 22 09:43 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                           |         |         |                     |                     |
	| stop    | -p no-preload-094130                              | no-preload-094130         | jenkins | v1.28.0 | 07 Nov 22 09:43 PST | 07 Nov 22 09:43 PST |
	|         | --alsologtostderr -v=3                            |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-094130             | no-preload-094130         | jenkins | v1.28.0 | 07 Nov 22 09:43 PST | 07 Nov 22 09:43 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-094130                              | no-preload-094130         | jenkins | v1.28.0 | 07 Nov 22 09:43 PST | 07 Nov 22 09:48 PST |
	|         | --memory=2200                                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                                 |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                           |         |         |                     |                     |
	|         | --driver=docker                                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-093929   | old-k8s-version-093929    | jenkins | v1.28.0 | 07 Nov 22 09:43 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-093929                         | old-k8s-version-093929    | jenkins | v1.28.0 | 07 Nov 22 09:45 PST | 07 Nov 22 09:45 PST |
	|         | --alsologtostderr -v=3                            |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-093929        | old-k8s-version-093929    | jenkins | v1.28.0 | 07 Nov 22 09:45 PST | 07 Nov 22 09:45 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-093929                         | old-k8s-version-093929    | jenkins | v1.28.0 | 07 Nov 22 09:45 PST |                     |
	|         | --memory=2200                                     |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                           |         |         |                     |                     |
	|         | --kvm-network=default                             |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                           |         |         |                     |                     |
	|         | --keep-context=false                              |                           |         |         |                     |                     |
	|         | --driver=docker                                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                           |         |         |                     |                     |
	| ssh     | -p no-preload-094130 sudo                         | no-preload-094130         | jenkins | v1.28.0 | 07 Nov 22 09:48 PST | 07 Nov 22 09:48 PST |
	|         | crictl images -o json                             |                           |         |         |                     |                     |
	| pause   | -p no-preload-094130                              | no-preload-094130         | jenkins | v1.28.0 | 07 Nov 22 09:48 PST | 07 Nov 22 09:48 PST |
	|         | --alsologtostderr -v=1                            |                           |         |         |                     |                     |
	| unpause | -p no-preload-094130                              | no-preload-094130         | jenkins | v1.28.0 | 07 Nov 22 09:48 PST | 07 Nov 22 09:48 PST |
	|         | --alsologtostderr -v=1                            |                           |         |         |                     |                     |
	| delete  | -p no-preload-094130                              | no-preload-094130         | jenkins | v1.28.0 | 07 Nov 22 09:48 PST | 07 Nov 22 09:48 PST |
	| delete  | -p no-preload-094130                              | no-preload-094130         | jenkins | v1.28.0 | 07 Nov 22 09:48 PST | 07 Nov 22 09:48 PST |
	| start   | -p embed-certs-094848                             | embed-certs-094848        | jenkins | v1.28.0 | 07 Nov 22 09:48 PST | 07 Nov 22 09:49 PST |
	|         | --memory=2200                                     |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-094848       | embed-certs-094848        | jenkins | v1.28.0 | 07 Nov 22 09:49 PST | 07 Nov 22 09:49 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                           |         |         |                     |                     |
	| stop    | -p embed-certs-094848                             | embed-certs-094848        | jenkins | v1.28.0 | 07 Nov 22 09:49 PST | 07 Nov 22 09:49 PST |
	|         | --alsologtostderr -v=3                            |                           |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-094848            | embed-certs-094848        | jenkins | v1.28.0 | 07 Nov 22 09:49 PST | 07 Nov 22 09:49 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p embed-certs-094848                             | embed-certs-094848        | jenkins | v1.28.0 | 07 Nov 22 09:49 PST |                     |
	|         | --memory=2200                                     |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                           |         |         |                     |                     |
	|---------|---------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 09:49:55
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 09:49:55.958143   18392 out.go:296] Setting OutFile to fd 1 ...
	I1107 09:49:55.958424   18392 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:49:55.958431   18392 out.go:309] Setting ErrFile to fd 2...
	I1107 09:49:55.958435   18392 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:49:55.958560   18392 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	I1107 09:49:55.959078   18392 out.go:303] Setting JSON to false
	I1107 09:49:55.977745   18392 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":4770,"bootTime":1667838625,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1107 09:49:55.977847   18392 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 09:49:55.999607   18392 out.go:177] * [embed-certs-094848] minikube v1.28.0 on Darwin 13.0
	I1107 09:49:56.021374   18392 notify.go:220] Checking for updates...
	I1107 09:49:56.043212   18392 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 09:49:56.065609   18392 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:49:56.087494   18392 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 09:49:56.109562   18392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 09:49:56.131549   18392 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	I1107 09:49:56.154077   18392 config.go:180] Loaded profile config "embed-certs-094848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:49:56.154769   18392 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 09:49:56.215928   18392 docker.go:137] docker version: linux-20.10.20
	I1107 09:49:56.216072   18392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 09:49:56.355614   18392 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-07 17:49:56.268384235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 09:49:56.377705   18392 out.go:177] * Using the docker driver based on existing profile
	I1107 09:49:56.403095   18392 start.go:282] selected driver: docker
	I1107 09:49:56.403121   18392 start.go:808] validating driver "docker" against &{Name:embed-certs-094848 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-094848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:49:56.403269   18392 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 09:49:56.407068   18392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 09:49:56.547275   18392 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-07 17:49:56.459884769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 09:49:56.547436   18392 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 09:49:56.547454   18392 cni.go:95] Creating CNI manager for ""
	I1107 09:49:56.547464   18392 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 09:49:56.547475   18392 start_flags.go:317] config:
	{Name:embed-certs-094848 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-094848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:49:56.569527   18392 out.go:177] * Starting control plane node embed-certs-094848 in cluster embed-certs-094848
	I1107 09:49:56.591289   18392 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 09:49:56.613120   18392 out.go:177] * Pulling base image ...
	I1107 09:49:56.655924   18392 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 09:49:56.655926   18392 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 09:49:56.655992   18392 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 09:49:56.656008   18392 cache.go:57] Caching tarball of preloaded images
	I1107 09:49:56.656160   18392 preload.go:174] Found /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 09:49:56.656176   18392 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 09:49:56.656748   18392 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/embed-certs-094848/config.json ...
	I1107 09:49:56.712247   18392 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 09:49:56.712266   18392 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 09:49:56.712279   18392 cache.go:208] Successfully downloaded all kic artifacts
	I1107 09:49:56.712324   18392 start.go:364] acquiring machines lock for embed-certs-094848: {Name:mk26d0583805e8cc0e8208d98cc907ab7d146b09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 09:49:56.712415   18392 start.go:368] acquired machines lock for "embed-certs-094848" in 71.645µs
	I1107 09:49:56.712440   18392 start.go:96] Skipping create...Using existing machine configuration
	I1107 09:49:56.712451   18392 fix.go:55] fixHost starting: 
	I1107 09:49:56.712739   18392 cli_runner.go:164] Run: docker container inspect embed-certs-094848 --format={{.State.Status}}
	I1107 09:49:56.769008   18392 fix.go:103] recreateIfNeeded on embed-certs-094848: state=Stopped err=<nil>
	W1107 09:49:56.769036   18392 fix.go:129] unexpected machine state, will restart: <nil>
	I1107 09:49:56.812568   18392 out.go:177] * Restarting existing docker container for "embed-certs-094848" ...
	I1107 09:49:56.833920   18392 cli_runner.go:164] Run: docker start embed-certs-094848
	I1107 09:49:57.153771   18392 cli_runner.go:164] Run: docker container inspect embed-certs-094848 --format={{.State.Status}}
	I1107 09:49:57.211887   18392 kic.go:415] container "embed-certs-094848" state is running.
	I1107 09:49:57.212462   18392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-094848
	I1107 09:49:57.274726   18392 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/embed-certs-094848/config.json ...
	I1107 09:49:57.275228   18392 machine.go:88] provisioning docker machine ...
	I1107 09:49:57.275257   18392 ubuntu.go:169] provisioning hostname "embed-certs-094848"
	I1107 09:49:57.275377   18392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-094848
	I1107 09:49:57.337844   18392 main.go:134] libmachine: Using SSH client type: native
	I1107 09:49:57.338044   18392 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54132 <nil> <nil>}
	I1107 09:49:57.338059   18392 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-094848 && echo "embed-certs-094848" | sudo tee /etc/hostname
	I1107 09:49:57.471003   18392 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-094848
	
	I1107 09:49:57.471125   18392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-094848
	I1107 09:49:57.529636   18392 main.go:134] libmachine: Using SSH client type: native
	I1107 09:49:57.529795   18392 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54132 <nil> <nil>}
	I1107 09:49:57.529807   18392 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-094848' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-094848/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-094848' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 09:49:57.646392   18392 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 09:49:57.646420   18392 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15310-2115/.minikube CaCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15310-2115/.minikube}
	I1107 09:49:57.646446   18392 ubuntu.go:177] setting up certificates
	I1107 09:49:57.646454   18392 provision.go:83] configureAuth start
	I1107 09:49:57.646548   18392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-094848
	I1107 09:49:57.706476   18392 provision.go:138] copyHostCerts
	I1107 09:49:57.706587   18392 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem, removing ...
	I1107 09:49:57.706598   18392 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 09:49:57.706708   18392 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem (1082 bytes)
	I1107 09:49:57.706921   18392 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem, removing ...
	I1107 09:49:57.706930   18392 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 09:49:57.706999   18392 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem (1123 bytes)
	I1107 09:49:57.707262   18392 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem, removing ...
	I1107 09:49:57.707269   18392 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 09:49:57.707331   18392 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem (1679 bytes)
	I1107 09:49:57.707470   18392 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem org=jenkins.embed-certs-094848 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-094848]
	I1107 09:49:57.775634   18392 provision.go:172] copyRemoteCerts
	I1107 09:49:57.775714   18392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 09:49:57.775788   18392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-094848
	I1107 09:49:57.837565   18392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54132 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/embed-certs-094848/id_rsa Username:docker}
	I1107 09:49:57.923786   18392 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 09:49:57.942130   18392 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 09:49:57.960704   18392 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1107 09:49:57.983261   18392 provision.go:86] duration metric: configureAuth took 336.782393ms
	I1107 09:49:57.983275   18392 ubuntu.go:193] setting minikube options for container-runtime
	I1107 09:49:57.983433   18392 config.go:180] Loaded profile config "embed-certs-094848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:49:57.983526   18392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-094848
	I1107 09:49:58.041447   18392 main.go:134] libmachine: Using SSH client type: native
	I1107 09:49:58.041618   18392 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54132 <nil> <nil>}
	I1107 09:49:58.041628   18392 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 09:49:58.159225   18392 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 09:49:58.159239   18392 ubuntu.go:71] root file system type: overlay
	I1107 09:49:58.159454   18392 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 09:49:58.159559   18392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-094848
	I1107 09:49:58.216842   18392 main.go:134] libmachine: Using SSH client type: native
	I1107 09:49:58.216999   18392 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54132 <nil> <nil>}
	I1107 09:49:58.217049   18392 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 09:49:58.344460   18392 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 09:49:58.344587   18392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-094848
	I1107 09:49:58.402637   18392 main.go:134] libmachine: Using SSH client type: native
	I1107 09:49:58.402820   18392 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 54132 <nil> <nil>}
	I1107 09:49:58.402833   18392 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 09:49:58.525470   18392 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 09:49:58.525486   18392 machine.go:91] provisioned docker machine in 1.25021112s
	I1107 09:49:58.525498   18392 start.go:300] post-start starting for "embed-certs-094848" (driver="docker")
	I1107 09:49:58.525504   18392 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 09:49:58.525586   18392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 09:49:58.525652   18392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-094848
	I1107 09:49:58.582733   18392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54132 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/embed-certs-094848/id_rsa Username:docker}
	I1107 09:49:58.671283   18392 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 09:49:58.674642   18392 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 09:49:58.674658   18392 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 09:49:58.674668   18392 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 09:49:58.674674   18392 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 09:49:58.674682   18392 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/addons for local assets ...
	I1107 09:49:58.674771   18392 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/files for local assets ...
	I1107 09:49:58.674931   18392 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> 32672.pem in /etc/ssl/certs
	I1107 09:49:58.675109   18392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 09:49:58.681944   18392 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:49:58.699130   18392 start.go:303] post-start completed in 173.617625ms
	I1107 09:49:58.699196   18392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 09:49:58.699266   18392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-094848
	I1107 09:49:58.756589   18392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54132 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/embed-certs-094848/id_rsa Username:docker}
	I1107 09:49:58.837491   18392 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 09:49:58.841643   18392 fix.go:57] fixHost completed within 2.129129638s
	I1107 09:49:58.841654   18392 start.go:83] releasing machines lock for "embed-certs-094848", held for 2.129168468s
	I1107 09:49:58.841741   18392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-094848
	I1107 09:49:58.899180   18392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 09:49:58.899200   18392 ssh_runner.go:195] Run: systemctl --version
	I1107 09:49:58.899269   18392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-094848
	I1107 09:49:58.899271   18392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-094848
	I1107 09:49:58.961449   18392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54132 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/embed-certs-094848/id_rsa Username:docker}
	I1107 09:49:58.961587   18392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54132 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/embed-certs-094848/id_rsa Username:docker}
	I1107 09:49:59.103125   18392 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 09:49:59.113249   18392 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 09:49:59.113324   18392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 09:49:59.125710   18392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 09:49:59.138762   18392 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 09:49:59.207530   18392 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 09:49:59.275719   18392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:49:59.340720   18392 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 09:49:59.575165   18392 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 09:49:59.632856   18392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 09:49:59.707803   18392 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1107 09:49:59.718328   18392 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 09:49:59.718420   18392 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 09:49:59.722375   18392 start.go:472] Will wait 60s for crictl version
	I1107 09:49:59.722433   18392 ssh_runner.go:195] Run: sudo crictl version
	I1107 09:49:59.823378   18392 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1107 09:49:59.823469   18392 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:49:59.850993   18392 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 09:49:59.922405   18392 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1107 09:49:59.922641   18392 cli_runner.go:164] Run: docker exec -t embed-certs-094848 dig +short host.docker.internal
	I1107 09:50:00.037622   18392 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1107 09:50:00.037732   18392 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1107 09:50:00.042069   18392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 09:50:00.051759   18392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-094848
	I1107 09:50:00.109224   18392 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 09:50:00.109316   18392 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 09:50:00.132936   18392 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1107 09:50:00.132952   18392 docker.go:543] Images already preloaded, skipping extraction
	I1107 09:50:00.133053   18392 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 09:50:00.156431   18392 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1107 09:50:00.156453   18392 cache_images.go:84] Images are preloaded, skipping loading
	I1107 09:50:00.156564   18392 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 09:50:00.224807   18392 cni.go:95] Creating CNI manager for ""
	I1107 09:50:00.224821   18392 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 09:50:00.224836   18392 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 09:50:00.224857   18392 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-094848 NodeName:embed-certs-094848 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 09:50:00.224971   18392 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-094848"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 09:50:00.225061   18392 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-094848 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:embed-certs-094848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 09:50:00.225135   18392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1107 09:50:00.232716   18392 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 09:50:00.232779   18392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 09:50:00.239734   18392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I1107 09:50:00.252094   18392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 09:50:00.264469   18392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2040 bytes)
	I1107 09:50:00.277057   18392 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1107 09:50:00.280793   18392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 09:50:00.290351   18392 certs.go:54] Setting up /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/embed-certs-094848 for IP: 192.168.67.2
	I1107 09:50:00.290469   18392 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key
	I1107 09:50:00.290524   18392 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key
	I1107 09:50:00.290612   18392 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/embed-certs-094848/client.key
	I1107 09:50:00.290682   18392 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/embed-certs-094848/apiserver.key.c7fa3a9e
	I1107 09:50:00.290740   18392 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/embed-certs-094848/proxy-client.key
	I1107 09:50:00.290956   18392 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem (1338 bytes)
	W1107 09:50:00.290995   18392 certs.go:384] ignoring /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267_empty.pem, impossibly tiny 0 bytes
	I1107 09:50:00.291007   18392 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 09:50:00.291046   18392 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem (1082 bytes)
	I1107 09:50:00.291086   18392 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem (1123 bytes)
	I1107 09:50:00.291121   18392 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem (1679 bytes)
	I1107 09:50:00.291198   18392 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem (1708 bytes)
	I1107 09:50:00.291859   18392 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/embed-certs-094848/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 09:50:00.308835   18392 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/embed-certs-094848/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 09:50:00.326329   18392 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/embed-certs-094848/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 09:50:00.343303   18392 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/embed-certs-094848/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 09:50:00.360423   18392 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 09:50:00.377581   18392 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 09:50:00.395252   18392 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 09:50:00.413377   18392 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 09:50:00.431009   18392 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 09:50:00.448433   18392 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem --> /usr/share/ca-certificates/3267.pem (1338 bytes)
	I1107 09:50:00.466024   18392 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /usr/share/ca-certificates/32672.pem (1708 bytes)
	I1107 09:50:00.482825   18392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 09:50:00.495343   18392 ssh_runner.go:195] Run: openssl version
	I1107 09:50:00.500471   18392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 09:50:00.508224   18392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:50:00.512417   18392 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:50:00.512463   18392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 09:50:00.517514   18392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 09:50:00.524818   18392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3267.pem && ln -fs /usr/share/ca-certificates/3267.pem /etc/ssl/certs/3267.pem"
	I1107 09:50:00.532716   18392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3267.pem
	I1107 09:50:00.536549   18392 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/3267.pem
	I1107 09:50:00.536603   18392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3267.pem
	I1107 09:50:00.541713   18392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3267.pem /etc/ssl/certs/51391683.0"
	I1107 09:50:00.549197   18392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32672.pem && ln -fs /usr/share/ca-certificates/32672.pem /etc/ssl/certs/32672.pem"
	I1107 09:50:00.556989   18392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32672.pem
	I1107 09:50:00.560878   18392 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/32672.pem
	I1107 09:50:00.560930   18392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32672.pem
	I1107 09:50:00.565832   18392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32672.pem /etc/ssl/certs/3ec20f2e.0"
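
The commands above install the minikube CA and the per-user certs into the guest's trust store: each PEM is copied to /usr/share/ca-certificates, hashed with "openssl x509 -hash -noout", and symlinked as /etc/ssl/certs/<hash>.0. A minimal Go sketch of that sequence, not minikube's actual certs.go code; the path is illustrative and it shells out to openssl just as the ssh_runner commands do:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors the commands in the log: compute the OpenSSL subject
// hash of the PEM and symlink /etc/ssl/certs/<hash>.0 to it so TLS clients
// that scan the hashed certificate directory will trust it.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // "ln -fs" semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	// Illustrative path; the log links minikubeCA.pem, 3267.pem and 32672.pem this way.
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
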
	I1107 09:50:00.573379   18392 kubeadm.go:396] StartCluster: {Name:embed-certs-094848 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-094848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 09:50:00.573501   18392 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 09:50:00.595906   18392 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 09:50:00.603490   18392 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1107 09:50:00.603503   18392 kubeadm.go:627] restartCluster start
	I1107 09:50:00.603555   18392 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1107 09:50:00.610373   18392 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:00.610448   18392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-094848
	I1107 09:50:00.668276   18392 kubeconfig.go:135] verify returned: extract IP: "embed-certs-094848" does not appear in /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 09:50:00.668442   18392 kubeconfig.go:146] "embed-certs-094848" context is missing from /Users/jenkins/minikube-integration/15310-2115/kubeconfig - will repair!
	I1107 09:50:00.668767   18392 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/kubeconfig: {Name:mk892d56d979702eee7d784abc692970bda7bca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 09:50:00.670129   18392 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1107 09:50:00.677949   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:00.678025   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:00.686303   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:00.888433   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:00.888682   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:00.899884   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:01.088011   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:01.088232   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:01.098406   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:01.288463   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:01.288612   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:01.299391   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:01.488452   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:01.488681   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:01.499487   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:01.688632   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:01.688772   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:01.699819   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:01.886438   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:01.886515   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:01.895867   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:02.088484   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:02.088652   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:02.099044   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:02.288471   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:02.288722   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:02.299207   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:02.486842   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:02.487040   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:02.497683   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:02.686687   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:02.686786   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:02.698068   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:02.888529   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:02.888777   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:02.899244   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:03.088682   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:03.088796   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:03.100125   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:03.288559   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:03.288677   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:03.300088   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:03.487373   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:03.487536   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:03.498041   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:03.688589   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:03.688772   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:03.700297   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:03.700307   18392 api_server.go:165] Checking apiserver status ...
	I1107 09:50:03.700354   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 09:50:03.709062   18392 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:03.709074   18392 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I1107 09:50:03.709082   18392 kubeadm.go:1114] stopping kube-system containers ...
	I1107 09:50:03.709162   18392 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 09:50:03.734149   18392 docker.go:444] Stopping containers: [da13fe6b4a67 068750857d50 f497c8e1e8d2 980d9a76972f 50e92c91b7a4 439c06ab0b09 4e5e77c6bf4c fa3d5e5e9975 f93f2eaefd21 51c277609afd 3f3670ea4ab3 1774b9a3bb85 0e7247854792 9986fad47035 e25bd52eb940 0f509d1ef0e2]
	I1107 09:50:03.734243   18392 ssh_runner.go:195] Run: docker stop da13fe6b4a67 068750857d50 f497c8e1e8d2 980d9a76972f 50e92c91b7a4 439c06ab0b09 4e5e77c6bf4c fa3d5e5e9975 f93f2eaefd21 51c277609afd 3f3670ea4ab3 1774b9a3bb85 0e7247854792 9986fad47035 e25bd52eb940 0f509d1ef0e2
	I1107 09:50:03.758110   18392 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1107 09:50:03.768865   18392 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 09:50:03.776646   18392 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Nov  7 17:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Nov  7 17:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Nov  7 17:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Nov  7 17:49 /etc/kubernetes/scheduler.conf
	
	I1107 09:50:03.776705   18392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1107 09:50:03.783929   18392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1107 09:50:03.791503   18392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1107 09:50:03.798962   18392 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:03.799017   18392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1107 09:50:03.806649   18392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1107 09:50:03.813563   18392 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:50:03.813619   18392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1107 09:50:03.820758   18392 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 09:50:03.828481   18392 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1107 09:50:03.828490   18392 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:50:03.879723   18392 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:50:04.584319   18392 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:50:04.716883   18392 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:50:04.767213   18392 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:50:04.821339   18392 api_server.go:51] waiting for apiserver process to appear ...
	I1107 09:50:04.821411   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:50:05.358760   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:50:05.858863   18392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:50:05.871618   18392 api_server.go:71] duration metric: took 1.05025377s to wait for apiserver process to appear ...
	I1107 09:50:05.871635   18392 api_server.go:87] waiting for apiserver healthz status ...
	I1107 09:50:05.871651   18392 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54131/healthz ...
	I1107 09:50:05.873265   18392 api_server.go:268] stopped: https://127.0.0.1:54131/healthz: Get "https://127.0.0.1:54131/healthz": EOF
	I1107 09:50:04.691249   17733 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1107 09:50:04.691634   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:50:04.691818   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:50:06.373548   18392 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54131/healthz ...
	I1107 09:50:09.132386   18392 api_server.go:278] https://127.0.0.1:54131/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1107 09:50:09.132413   18392 api_server.go:102] status: https://127.0.0.1:54131/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 09:50:09.375172   18392 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54131/healthz ...
	I1107 09:50:09.381300   18392 api_server.go:278] https://127.0.0.1:54131/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 09:50:09.381316   18392 api_server.go:102] status: https://127.0.0.1:54131/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 09:50:09.874080   18392 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54131/healthz ...
	I1107 09:50:09.880017   18392 api_server.go:278] https://127.0.0.1:54131/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 09:50:09.880033   18392 api_server.go:102] status: https://127.0.0.1:54131/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 09:50:10.374830   18392 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54131/healthz ...
	I1107 09:50:10.382747   18392 api_server.go:278] https://127.0.0.1:54131/healthz returned 200:
	ok
	I1107 09:50:10.389279   18392 api_server.go:140] control plane version: v1.25.3
	I1107 09:50:10.389290   18392 api_server.go:130] duration metric: took 4.517515492s to wait for apiserver health ...
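
The healthz wait above polls https://127.0.0.1:54131/healthz and tolerates the early 403 (anonymous user) and 500 (bootstrap-roles and priority-classes hooks still running) responses until a plain 200 "ok" comes back. A rough sketch of such a polling loop, not minikube's actual api_server.go code; the URL and timeout are assumptions:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver health endpoint until it returns 200 or
// the deadline passes. Certificate verification is skipped because the probe
// hits 127.0.0.1 with a cluster-local CA, as the anonymous probes in the log do.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200 "ok"
			}
			// 403/500 while RBAC and priority classes bootstrap: keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:54131/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
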
	I1107 09:50:10.389299   18392 cni.go:95] Creating CNI manager for ""
	I1107 09:50:10.389311   18392 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 09:50:10.389325   18392 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 09:50:10.396093   18392 system_pods.go:59] 8 kube-system pods found
	I1107 09:50:10.396107   18392 system_pods.go:61] "coredns-565d847f94-rt6rq" [2c783a79-74cd-4da6-856f-75679a35c2ba] Running
	I1107 09:50:10.396111   18392 system_pods.go:61] "etcd-embed-certs-094848" [1be7224b-738c-4e13-b381-d7ae6e2395c9] Running
	I1107 09:50:10.396115   18392 system_pods.go:61] "kube-apiserver-embed-certs-094848" [57134c5e-c1e9-4af0-9bf4-9eb7109387f0] Running
	I1107 09:50:10.396120   18392 system_pods.go:61] "kube-controller-manager-embed-certs-094848" [ed37e668-bb0d-4f58-9820-276b470e977d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 09:50:10.396125   18392 system_pods.go:61] "kube-proxy-m256r" [af95ed90-c259-4791-9811-179b24101f40] Running
	I1107 09:50:10.396130   18392 system_pods.go:61] "kube-scheduler-embed-certs-094848" [6b96908d-0ff6-49d0-938d-3afd9a5c596a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1107 09:50:10.396140   18392 system_pods.go:61] "metrics-server-5c8fd5cf8-2xkr2" [26401f9f-2176-4a67-986b-fd3604f4eb06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 09:50:10.396144   18392 system_pods.go:61] "storage-provisioner" [a833d7b5-deb2-4f2f-a285-4af73988b591] Running
	I1107 09:50:10.396159   18392 system_pods.go:74] duration metric: took 6.827551ms to wait for pod list to return data ...
	I1107 09:50:10.396167   18392 node_conditions.go:102] verifying NodePressure condition ...
	I1107 09:50:10.399208   18392 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1107 09:50:10.399225   18392 node_conditions.go:123] node cpu capacity is 6
	I1107 09:50:10.399239   18392 node_conditions.go:105] duration metric: took 3.067552ms to run NodePressure ...
	I1107 09:50:10.399260   18392 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 09:50:10.573679   18392 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1107 09:50:10.578351   18392 kubeadm.go:778] kubelet initialised
	I1107 09:50:10.578362   18392 kubeadm.go:779] duration metric: took 4.668052ms waiting for restarted kubelet to initialise ...
	I1107 09:50:10.578373   18392 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 09:50:10.583315   18392 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-rt6rq" in "kube-system" namespace to be "Ready" ...
	I1107 09:50:10.588788   18392 pod_ready.go:92] pod "coredns-565d847f94-rt6rq" in "kube-system" namespace has status "Ready":"True"
	I1107 09:50:10.588798   18392 pod_ready.go:81] duration metric: took 5.469443ms waiting for pod "coredns-565d847f94-rt6rq" in "kube-system" namespace to be "Ready" ...
	I1107 09:50:10.588808   18392 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-094848" in "kube-system" namespace to be "Ready" ...
	I1107 09:50:10.593883   18392 pod_ready.go:92] pod "etcd-embed-certs-094848" in "kube-system" namespace has status "Ready":"True"
	I1107 09:50:10.593896   18392 pod_ready.go:81] duration metric: took 5.08101ms waiting for pod "etcd-embed-certs-094848" in "kube-system" namespace to be "Ready" ...
	I1107 09:50:10.593903   18392 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-094848" in "kube-system" namespace to be "Ready" ...
	I1107 09:50:10.599167   18392 pod_ready.go:92] pod "kube-apiserver-embed-certs-094848" in "kube-system" namespace has status "Ready":"True"
	I1107 09:50:10.599177   18392 pod_ready.go:81] duration metric: took 5.262184ms waiting for pod "kube-apiserver-embed-certs-094848" in "kube-system" namespace to be "Ready" ...
	I1107 09:50:10.599184   18392 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-094848" in "kube-system" namespace to be "Ready" ...
	I1107 09:50:09.689998   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:50:09.690210   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:50:12.799744   18392 pod_ready.go:102] pod "kube-controller-manager-embed-certs-094848" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:15.300731   18392 pod_ready.go:102] pod "kube-controller-manager-embed-certs-094848" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:17.301116   18392 pod_ready.go:102] pod "kube-controller-manager-embed-certs-094848" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:19.798482   18392 pod_ready.go:102] pod "kube-controller-manager-embed-certs-094848" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:19.684628   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:50:19.684860   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:50:21.798874   18392 pod_ready.go:102] pod "kube-controller-manager-embed-certs-094848" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:23.799202   18392 pod_ready.go:92] pod "kube-controller-manager-embed-certs-094848" in "kube-system" namespace has status "Ready":"True"
	I1107 09:50:23.799215   18392 pod_ready.go:81] duration metric: took 13.199634401s waiting for pod "kube-controller-manager-embed-certs-094848" in "kube-system" namespace to be "Ready" ...
	I1107 09:50:23.799221   18392 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m256r" in "kube-system" namespace to be "Ready" ...
	I1107 09:50:23.803755   18392 pod_ready.go:92] pod "kube-proxy-m256r" in "kube-system" namespace has status "Ready":"True"
	I1107 09:50:23.803763   18392 pod_ready.go:81] duration metric: took 4.536687ms waiting for pod "kube-proxy-m256r" in "kube-system" namespace to be "Ready" ...
	I1107 09:50:23.803776   18392 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-094848" in "kube-system" namespace to be "Ready" ...
	I1107 09:50:23.807861   18392 pod_ready.go:92] pod "kube-scheduler-embed-certs-094848" in "kube-system" namespace has status "Ready":"True"
	I1107 09:50:23.807869   18392 pod_ready.go:81] duration metric: took 4.087343ms waiting for pod "kube-scheduler-embed-certs-094848" in "kube-system" namespace to be "Ready" ...
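
The pod_ready waits above check each system-critical pod's PodReady condition; after this point the run blocks on metrics-server-5c8fd5cf8-2xkr2, which never reports Ready. A small client-go sketch of that check, assuming a kubeconfig path and reusing the pod name from the log purely for illustration (this is not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True, which is the
// status the pod_ready log lines above are waiting on.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 120; i++ {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-embed-certs-094848", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
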
	I1107 09:50:23.807876   18392 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace to be "Ready" ...
	I1107 09:50:25.816743   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:27.820879   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:30.321131   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:32.818803   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:34.819511   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:36.821155   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:39.326269   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:39.672865   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:50:39.673065   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:50:41.821636   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:44.319276   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:46.321488   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:48.819454   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:51.319457   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:53.321493   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:55.321695   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:57.820074   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:50:59.821844   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:02.320018   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:04.320163   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:06.321713   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:08.322268   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:10.820392   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:13.322402   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:15.822104   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:19.648322   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:51:19.648492   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:51:19.648503   17733 kubeadm.go:317] 
	I1107 09:51:19.648530   17733 kubeadm.go:317] Unfortunately, an error has occurred:
	I1107 09:51:19.648560   17733 kubeadm.go:317] 	timed out waiting for the condition
	I1107 09:51:19.648565   17733 kubeadm.go:317] 
	I1107 09:51:19.648598   17733 kubeadm.go:317] This error is likely caused by:
	I1107 09:51:19.648622   17733 kubeadm.go:317] 	- The kubelet is not running
	I1107 09:51:19.648697   17733 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1107 09:51:19.648709   17733 kubeadm.go:317] 
	I1107 09:51:19.648791   17733 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1107 09:51:19.648819   17733 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1107 09:51:19.648852   17733 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1107 09:51:19.648862   17733 kubeadm.go:317] 
	I1107 09:51:19.648941   17733 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1107 09:51:19.649019   17733 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1107 09:51:19.649088   17733 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1107 09:51:19.649123   17733 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1107 09:51:19.649174   17733 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1107 09:51:19.649197   17733 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1107 09:51:19.651759   17733 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1107 09:51:19.651871   17733 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1107 09:51:19.651951   17733 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 09:51:19.652038   17733 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1107 09:51:19.652123   17733 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W1107 09:51:19.652253   17733 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1107 09:51:19.652281   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1107 09:51:20.069588   17733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 09:51:20.080241   17733 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 09:51:20.080306   17733 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 09:51:20.087749   17733 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 09:51:20.087774   17733 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 09:51:20.135336   17733 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1107 09:51:20.135375   17733 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 09:51:20.437748   17733 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 09:51:20.437851   17733 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 09:51:20.437936   17733 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 09:51:20.653891   17733 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 09:51:20.654964   17733 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 09:51:20.661489   17733 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1107 09:51:20.734650   17733 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 09:51:18.320100   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:20.819339   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:20.777321   17733 out.go:204]   - Generating certificates and keys ...
	I1107 09:51:20.777393   17733 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 09:51:20.777463   17733 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 09:51:20.777558   17733 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1107 09:51:20.777667   17733 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1107 09:51:20.777794   17733 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1107 09:51:20.777876   17733 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1107 09:51:20.777939   17733 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1107 09:51:20.778025   17733 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1107 09:51:20.778112   17733 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1107 09:51:20.778173   17733 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1107 09:51:20.778243   17733 kubeadm.go:317] [certs] Using the existing "sa" key
	I1107 09:51:20.778287   17733 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 09:51:21.051028   17733 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 09:51:21.236036   17733 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 09:51:21.353889   17733 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 09:51:21.530808   17733 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 09:51:21.531461   17733 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 09:51:21.552917   17733 out.go:204]   - Booting up control plane ...
	I1107 09:51:21.553074   17733 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 09:51:21.553250   17733 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 09:51:21.553373   17733 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 09:51:21.553510   17733 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 09:51:21.553802   17733 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 09:51:22.819493   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:24.820958   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:26.821170   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:29.320846   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:31.322093   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:33.820619   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:35.821449   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:38.321476   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:40.819996   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:42.820368   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:45.321352   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:47.823775   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:50.322486   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:52.820051   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:54.825533   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:57.320258   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:51:59.321767   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:01.512622   17733 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1107 09:52:01.513359   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:52:01.513737   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:52:01.819681   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:03.822657   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:06.512067   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:52:06.512292   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:52:06.320283   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:08.320817   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:10.821515   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:13.323419   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:15.823291   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:16.505926   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:52:16.506105   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:52:18.320491   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:20.321947   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:22.323577   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:24.822428   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:26.823204   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:29.326365   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:31.326577   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:33.823007   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:35.823972   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:36.492667   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:52:36.492818   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:52:38.321318   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:40.821445   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:42.825131   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:45.321675   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:47.322349   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:49.821314   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:51.825560   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:54.322350   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:56.822340   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:52:58.823845   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:53:01.322252   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:53:03.322706   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:53:05.322875   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:53:07.323457   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:53:09.822902   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:53:12.322650   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:53:14.824292   18392 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-2xkr2" in "kube-system" namespace has status "Ready":"False"
	I1107 09:53:16.467026   17733 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 09:53:16.467241   17733 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 09:53:16.467265   17733 kubeadm.go:317] 
	I1107 09:53:16.467322   17733 kubeadm.go:317] Unfortunately, an error has occurred:
	I1107 09:53:16.467373   17733 kubeadm.go:317] 	timed out waiting for the condition
	I1107 09:53:16.467379   17733 kubeadm.go:317] 
	I1107 09:53:16.467427   17733 kubeadm.go:317] This error is likely caused by:
	I1107 09:53:16.467469   17733 kubeadm.go:317] 	- The kubelet is not running
	I1107 09:53:16.467571   17733 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1107 09:53:16.467580   17733 kubeadm.go:317] 
	I1107 09:53:16.467709   17733 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1107 09:53:16.467747   17733 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1107 09:53:16.467784   17733 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1107 09:53:16.467797   17733 kubeadm.go:317] 
	I1107 09:53:16.467904   17733 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1107 09:53:16.467994   17733 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1107 09:53:16.468074   17733 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1107 09:53:16.468118   17733 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1107 09:53:16.468188   17733 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1107 09:53:16.468222   17733 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1107 09:53:16.470825   17733 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1107 09:53:16.470942   17733 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1107 09:53:16.471042   17733 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 09:53:16.471118   17733 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1107 09:53:16.471197   17733 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1107 09:53:16.471210   17733 kubeadm.go:398] StartCluster complete in 7m57.861468859s
	I1107 09:53:16.471307   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 09:53:16.494433   17733 logs.go:274] 0 containers: []
	W1107 09:53:16.494445   17733 logs.go:276] No container was found matching "kube-apiserver"
	I1107 09:53:16.494525   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 09:53:16.517244   17733 logs.go:274] 0 containers: []
	W1107 09:53:16.517256   17733 logs.go:276] No container was found matching "etcd"
	I1107 09:53:16.517337   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 09:53:16.540587   17733 logs.go:274] 0 containers: []
	W1107 09:53:16.540600   17733 logs.go:276] No container was found matching "coredns"
	I1107 09:53:16.540681   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 09:53:16.563465   17733 logs.go:274] 0 containers: []
	W1107 09:53:16.563478   17733 logs.go:276] No container was found matching "kube-scheduler"
	I1107 09:53:16.563557   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 09:53:16.586980   17733 logs.go:274] 0 containers: []
	W1107 09:53:16.586993   17733 logs.go:276] No container was found matching "kube-proxy"
	I1107 09:53:16.587074   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 09:53:16.609201   17733 logs.go:274] 0 containers: []
	W1107 09:53:16.609214   17733 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 09:53:16.609298   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 09:53:16.631665   17733 logs.go:274] 0 containers: []
	W1107 09:53:16.631677   17733 logs.go:276] No container was found matching "storage-provisioner"
	I1107 09:53:16.631760   17733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 09:53:16.653649   17733 logs.go:274] 0 containers: []
	W1107 09:53:16.653661   17733 logs.go:276] No container was found matching "kube-controller-manager"
	I1107 09:53:16.653669   17733 logs.go:123] Gathering logs for kubelet ...
	I1107 09:53:16.653676   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 09:53:16.694973   17733 logs.go:123] Gathering logs for dmesg ...
	I1107 09:53:16.694985   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 09:53:16.706841   17733 logs.go:123] Gathering logs for describe nodes ...
	I1107 09:53:16.706853   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 09:53:16.763729   17733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 09:53:16.763742   17733 logs.go:123] Gathering logs for Docker ...
	I1107 09:53:16.763749   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 09:53:16.780034   17733 logs.go:123] Gathering logs for container status ...
	I1107 09:53:16.780049   17733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 09:53:18.829381   17733 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049259356s)
	W1107 09:53:18.829500   17733 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1107 09:53:18.829514   17733 out.go:239] * 
	W1107 09:53:18.829654   17733 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 09:53:18.829672   17733 out.go:239] * 
	W1107 09:53:18.830335   17733 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 09:53:18.894853   17733 out.go:177] 
	W1107 09:53:18.937524   17733 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 09:53:18.959102   17733 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1107 09:53:18.959173   17733 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1107 09:53:19.021733   17733 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-11-07 17:45:15 UTC, end at Mon 2022-11-07 17:53:20 UTC. --
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[130]: time="2022-11-07T17:45:17.642163935Z" level=info msg="Processing signal 'terminated'"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[130]: time="2022-11-07T17:45:17.643228381Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[130]: time="2022-11-07T17:45:17.643875758Z" level=info msg="Daemon shutdown complete"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[130]: time="2022-11-07T17:45:17.643949836Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Nov 07 17:45:17 old-k8s-version-093929 systemd[1]: docker.service: Succeeded.
	Nov 07 17:45:17 old-k8s-version-093929 systemd[1]: Stopped Docker Application Container Engine.
	Nov 07 17:45:17 old-k8s-version-093929 systemd[1]: Starting Docker Application Container Engine...
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.695361650Z" level=info msg="Starting up"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.696807685Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.696840105Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.696863638Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.696873330Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.698141384Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.698178354Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.698194160Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.698202660Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.701972523Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.705958338Z" level=info msg="Loading containers: start."
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.781668503Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.812176137Z" level=info msg="Loading containers: done."
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.822689120Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.822804604Z" level=info msg="Daemon has completed initialization"
	Nov 07 17:45:17 old-k8s-version-093929 systemd[1]: Started Docker Application Container Engine.
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.843870638Z" level=info msg="API listen on [::]:2376"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.850307657Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-11-07T17:53:22Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  17:53:22 up  1:22,  0 users,  load average: 0.73, 0.90, 0.98
	Linux old-k8s-version-093929 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-11-07 17:45:15 UTC, end at Mon 2022-11-07 17:53:23 UTC. --
	Nov 07 17:53:21 old-k8s-version-093929 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 07 17:53:21 old-k8s-version-093929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Nov 07 17:53:21 old-k8s-version-093929 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 07 17:53:21 old-k8s-version-093929 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 07 17:53:22 old-k8s-version-093929 kubelet[14530]: I1107 17:53:22.065933   14530 server.go:410] Version: v1.16.0
	Nov 07 17:53:22 old-k8s-version-093929 kubelet[14530]: I1107 17:53:22.066409   14530 plugins.go:100] No cloud provider specified.
	Nov 07 17:53:22 old-k8s-version-093929 kubelet[14530]: I1107 17:53:22.066454   14530 server.go:773] Client rotation is on, will bootstrap in background
	Nov 07 17:53:22 old-k8s-version-093929 kubelet[14530]: I1107 17:53:22.068231   14530 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 07 17:53:22 old-k8s-version-093929 kubelet[14530]: W1107 17:53:22.068912   14530 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 07 17:53:22 old-k8s-version-093929 kubelet[14530]: W1107 17:53:22.068977   14530 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 07 17:53:22 old-k8s-version-093929 kubelet[14530]: F1107 17:53:22.069006   14530 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 07 17:53:22 old-k8s-version-093929 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 07 17:53:22 old-k8s-version-093929 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 07 17:53:22 old-k8s-version-093929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Nov 07 17:53:22 old-k8s-version-093929 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 07 17:53:22 old-k8s-version-093929 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 07 17:53:22 old-k8s-version-093929 kubelet[14563]: I1107 17:53:22.812542   14563 server.go:410] Version: v1.16.0
	Nov 07 17:53:22 old-k8s-version-093929 kubelet[14563]: I1107 17:53:22.812894   14563 plugins.go:100] No cloud provider specified.
	Nov 07 17:53:22 old-k8s-version-093929 kubelet[14563]: I1107 17:53:22.812928   14563 server.go:773] Client rotation is on, will bootstrap in background
	Nov 07 17:53:22 old-k8s-version-093929 kubelet[14563]: I1107 17:53:22.814657   14563 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 07 17:53:22 old-k8s-version-093929 kubelet[14563]: W1107 17:53:22.815380   14563 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 07 17:53:22 old-k8s-version-093929 kubelet[14563]: W1107 17:53:22.815446   14563 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 07 17:53:22 old-k8s-version-093929 kubelet[14563]: F1107 17:53:22.815473   14563 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 07 17:53:22 old-k8s-version-093929 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 07 17:53:22 old-k8s-version-093929 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 09:53:22.668731   18677 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-093929 -n old-k8s-version-093929
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-093929 -n old-k8s-version-093929: exit status 2 (396.684387ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-093929" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (489.79s)
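Note: the suggestion minikube prints above amounts to restarting the same profile with an explicit kubelet cgroup driver. A minimal sketch of that retry, assuming the binary path, profile name and Kubernetes version shown in the log above, and keeping only the flags relevant to the suggestion (any other flags from the original invocation are omitted here):

	# Illustrative sketch only: profile name, binary path and --kubernetes-version
	# are taken from the log above; --extra-config=kubelet.cgroup-driver=systemd is
	# the flag minikube itself suggests for the "mountpoint for cpu not found"
	# kubelet failure.
	out/minikube-darwin-amd64 start -p old-k8s-version-093929 \
	  --kubernetes-version=v1.16.0 \
	  --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd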

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1107 09:53:30.983767    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:53:39.793447    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:54:05.986471    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:54:07.777193    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:54:16.352285    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:54:20.754795    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
E1107 09:54:27.541439    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/cilium-092105/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:54:44.731034    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:55:12.799140    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:55:38.326182    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:55:42.677846    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:55:50.589356    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/cilium-092105/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:57:20.693584    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:57:35.483468    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:57:53.311401    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:57:58.836255    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
E1107 09:58:03.261875    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:58:21.681111    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
E1107 09:58:26.522974    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:58:30.994712    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 09:58:41.389931    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:58:43.747567    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:59:05.994697    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:59:07.788165    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 09:59:26.354942    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 09:59:27.550381    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/cilium-092105/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:00:12.809912    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:00:29.054201    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 10:00:30.844463    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:00:38.335522    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:01:12.439097    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:02:20.702321    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-093929 -n old-k8s-version-093929
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-093929 -n old-k8s-version-093929: exit status 2 (409.127358ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-093929" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-093929
helpers_test.go:235: (dbg) docker inspect old-k8s-version-093929:

-- stdout --
	[
	    {
	        "Id": "50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd",
	        "Created": "2022-11-07T17:39:36.809249754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280182,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T17:45:15.193119617Z",
	            "FinishedAt": "2022-11-07T17:45:12.325790936Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/hosts",
	        "LogPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd-json.log",
	        "Name": "/old-k8s-version-093929",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-093929:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-093929",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad-init/diff:/var/lib/docker/overlay2/8ef76795356079208b1acef7376be67a28d951b743a50dd56a60b0d456568ae9/diff:/var/lib/docker/overlay2/f9288d2baad2a30057af35c115d2ebfb4650d5d1d798a60a2334facced392980/diff:/var/lib/docker/overlay2/270f6ca71b47e51691c54d669e6e8e86c321939c053498289406eab5aa0462f5/diff:/var/lib/docker/overlay2/ebe3fe002872a87a7cc54a77192a2ea1f0efb3730f887abec35652e72f152f46/diff:/var/lib/docker/overlay2/83c9d5ae9817ab2b318ad7ba44ade4fe9c22378e15e338b8fe94c5998fbac5c4/diff:/var/lib/docker/overlay2/6426b1d4e4f369bec5066b3c17c47f9c451787be596ba417de62155901d14061/diff:/var/lib/docker/overlay2/f409955dc1056669a5ee00fa64ecfa9733f3de1a92beefeeca73cba51d930189/diff:/var/lib/docker/overlay2/3ecb7ca97b99ba70c03450a3d6d4a4452c7e9e348eec3cf89e6e8ee51aba6a8b/diff:/var/lib/docker/overlay2/9dd8fffded9665b1b7a326cb2bb3e29e3b716cdba6544940490326ddcbfe2bda/diff:/var/lib/docker/overlay2/b43aed
d977d94230f77efb53c193c1a02895ea314fcdece500155052dfeb6b29/diff:/var/lib/docker/overlay2/ba3bd8f651e3503bd8eadf3ce01b8930edaf7eb6af4044593c756be0f3c5d03a/diff:/var/lib/docker/overlay2/359c64a8e323929352da8612c231ccf0f6be76af37c8a208a9ee98c3bce5e2a1/diff:/var/lib/docker/overlay2/868ec2aea7bce1a74dcdf6c7a708b34838e8c08e795aad6e5b974d1ab15b719c/diff:/var/lib/docker/overlay2/0438a0192165f11b19940586b456c07bfa31d015147b9d008aafaacc09fbc40c/diff:/var/lib/docker/overlay2/80a13b6491a8f9f1c0f6848a375575c20f50d592cb34f21491050776a56fca61/diff:/var/lib/docker/overlay2/dd29a4d45bcf60d3684330374a82b3f3bde4245c5d49661ffdd516cd0c0af260/diff:/var/lib/docker/overlay2/ef8c6936e45d238f2880da0d94945cb610fba8a9e38cdfb3ae6674a82a8f0480/diff:/var/lib/docker/overlay2/9934f45b2cecf953b6f56ee634f63c3dd99c8c358b74fee64fdc62cef64f7723/diff:/var/lib/docker/overlay2/f5ccdcf1811b84ddfcc2efdc07e5feefa2803c1fe476b6653b0a6af55c2e684f/diff:/var/lib/docker/overlay2/2b3b062a0d083aedf009b6c8dde21debe0396b301936ec1950364a1d0ef86b6d/diff:/var/lib/d
ocker/overlay2/db91c57bd6754e3dbdc6c234df413d494606d408e284454bf7ab30cd23f9e840/diff:/var/lib/docker/overlay2/6538f86ce38383e3a133480b44c25afa8b31a61935d6f87270e2cc139e424425/diff:/var/lib/docker/overlay2/80972648e2aa65675fe7f3de22feae57951c0092d5f963f2430650b071940bba/diff:/var/lib/docker/overlay2/19dc0f28f2a85362d2b586f65ab00efa8a97868656af9dc5911259dd3ca649ac/diff:/var/lib/docker/overlay2/99eff050eadab512f36f80d63e8b57d9aa45ef607d723d7ac3f20ece8310a758/diff:/var/lib/docker/overlay2/d6309ab08fa5212992e2b5125645ad32bce2940b50c5e8a5b72e7c7531eb80b4/diff:/var/lib/docker/overlay2/c4d3d6d4212753e50a5f68577281382a30773fb33ca98730aebdfd86d48f612c/diff:/var/lib/docker/overlay2/4292068e16912b59305479ae020d9aa923d57157c4a28dd11e69102be9c1541a/diff:/var/lib/docker/overlay2/2274c567eadc1a99c8173258b3794df0df44fd1abac0aaae2100133ad15b3f30/diff:/var/lib/docker/overlay2/e3bb447cc7563c5af39c4076a93bb7b33bd1a7c6c5ccef7fea2a6a99deddf9f3/diff:/var/lib/docker/overlay2/4329b8a4d7648d8e3bb46a144b9939a5026fa69e5ac188a778cf6ede21a
9627e/diff:/var/lib/docker/overlay2/b600639ff99f881a9eb993fd36e2faf1c0f88a869675ab9d8ec116efc2642784/diff:/var/lib/docker/overlay2/da083fbec4f2fa2681bbaaaa559fdcc46ec2a520e7b9ced39197e805a661fda3/diff:/var/lib/docker/overlay2/63848d00284d16d750a7e746c8be62f8c15819bc2fcb72297788f3c9647257e6/diff:/var/lib/docker/overlay2/3fd667008c6a5c1c5828bb4e003fc21c477a31c4d59b5b675a3886d8a7cb782d/diff:/var/lib/docker/overlay2/6b125cd950aed912fcc597ce8a96bbb5af3dbba111d6eb683ea981387e02e99d/diff:/var/lib/docker/overlay2/b4c672faa14a55ba585c6063024785d7913afc546dd6d04975591d2e13d7b52f/diff:/var/lib/docker/overlay2/c2c0287a05145a26d3313d4e33799ea96103a20115734a66a3c2af8fe728b170/diff:/var/lib/docker/overlay2/dba7b9788bd657997c8cee3b3ef21f9bc4ade7b5a0da25526255047311da571d/diff:/var/lib/docker/overlay2/1f3ae87b3ce804fde9f857de6cb225d5afa00aa39260d197d77f67e840e2d285/diff:/var/lib/docker/overlay2/603b72832425bade21ef2d76583dbe61a46ff7fbe7277673cbc6cd52cf7613dd/diff:/var/lib/docker/overlay2/a47793b1e0564c094c05134af06d2d46a6bcb7
6089b3836b831863ef51c21684/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-093929",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-093929/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-093929",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-093929",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-093929",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "78b2525896e2425ce76d453e029d9934dbd9eef1f99a4534a067bd3dedbbaf31",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53969"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53970"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53971"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53972"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53973"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/78b2525896e2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-093929": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "50811d3bbfa7",
	                        "old-k8s-version-093929"
	                    ],
	                    "NetworkID": "85b1c6253454469ed38e54ce96d4baef11c9b5b3afd90032e806121a14971f03",
	                    "EndpointID": "0eb940817c54781bdb3cdfa6365fbd23635a65c83ea00240310b1565886e76f0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-093929 -n old-k8s-version-093929
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-093929 -n old-k8s-version-093929: exit status 2 (428.131347ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-093929 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-093929 logs -n 25: (3.415328436s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-094848                | embed-certs-094848           | jenkins | v1.28.0 | 07 Nov 22 09:49 PST | 07 Nov 22 09:49 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p embed-certs-094848                                      | embed-certs-094848           | jenkins | v1.28.0 | 07 Nov 22 09:49 PST | 07 Nov 22 09:49 PST |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-094848                     | embed-certs-094848           | jenkins | v1.28.0 | 07 Nov 22 09:49 PST | 07 Nov 22 09:49 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p embed-certs-094848                                      | embed-certs-094848           | jenkins | v1.28.0 | 07 Nov 22 09:49 PST | 07 Nov 22 09:54 PST |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-094848 sudo                                 | embed-certs-094848           | jenkins | v1.28.0 | 07 Nov 22 09:55 PST | 07 Nov 22 09:55 PST |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p embed-certs-094848                                      | embed-certs-094848           | jenkins | v1.28.0 | 07 Nov 22 09:55 PST | 07 Nov 22 09:55 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p embed-certs-094848                                      | embed-certs-094848           | jenkins | v1.28.0 | 07 Nov 22 09:55 PST | 07 Nov 22 09:55 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-094848                                      | embed-certs-094848           | jenkins | v1.28.0 | 07 Nov 22 09:55 PST | 07 Nov 22 09:55 PST |
	| delete  | -p embed-certs-094848                                      | embed-certs-094848           | jenkins | v1.28.0 | 07 Nov 22 09:55 PST | 07 Nov 22 09:55 PST |
	| delete  | -p                                                         | disable-driver-mounts-095521 | jenkins | v1.28.0 | 07 Nov 22 09:55 PST | 07 Nov 22 09:55 PST |
	|         | disable-driver-mounts-095521                               |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 09:55 PST | 07 Nov 22 09:56 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 09:56 PST | 07 Nov 22 09:56 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 09:56 PST | 07 Nov 22 09:56 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-095521           | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 09:56 PST | 07 Nov 22 09:56 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 09:56 PST | 07 Nov 22 10:01 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 10:01 PST | 07 Nov 22 10:01 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                              |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 10:01 PST | 07 Nov 22 10:01 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 10:01 PST | 07 Nov 22 10:01 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 10:01 PST | 07 Nov 22 10:01 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 10:01 PST | 07 Nov 22 10:01 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	| start   | -p newest-cni-100155 --memory=2200 --alsologtostderr       | newest-cni-100155            | jenkins | v1.28.0 | 07 Nov 22 10:01 PST | 07 Nov 22 10:02 PST |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-100155                 | newest-cni-100155            | jenkins | v1.28.0 | 07 Nov 22 10:02 PST | 07 Nov 22 10:02 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-100155                                       | newest-cni-100155            | jenkins | v1.28.0 | 07 Nov 22 10:02 PST | 07 Nov 22 10:02 PST |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-100155                      | newest-cni-100155            | jenkins | v1.28.0 | 07 Nov 22 10:02 PST | 07 Nov 22 10:02 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-100155 --memory=2200 --alsologtostderr       | newest-cni-100155            | jenkins | v1.28.0 | 07 Nov 22 10:02 PST |                     |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 10:02:50
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 10:02:50.101130   19943 out.go:296] Setting OutFile to fd 1 ...
	I1107 10:02:50.101323   19943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 10:02:50.101329   19943 out.go:309] Setting ErrFile to fd 2...
	I1107 10:02:50.101337   19943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 10:02:50.101453   19943 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	I1107 10:02:50.101954   19943 out.go:303] Setting JSON to false
	I1107 10:02:50.121007   19943 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5545,"bootTime":1667838625,"procs":390,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1107 10:02:50.121099   19943 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 10:02:50.143367   19943 out.go:177] * [newest-cni-100155] minikube v1.28.0 on Darwin 13.0
	I1107 10:02:50.185131   19943 notify.go:220] Checking for updates...
	I1107 10:02:50.206881   19943 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 10:02:50.228090   19943 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 10:02:50.249372   19943 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 10:02:50.270921   19943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 10:02:50.292331   19943 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	I1107 10:02:50.314891   19943 config.go:180] Loaded profile config "newest-cni-100155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 10:02:50.315565   19943 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 10:02:50.378558   19943 docker.go:137] docker version: linux-20.10.20
	I1107 10:02:50.378701   19943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 10:02:50.519336   19943 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-07 18:02:50.448380495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 10:02:50.562962   19943 out.go:177] * Using the docker driver based on existing profile
	I1107 10:02:50.583967   19943 start.go:282] selected driver: docker
	I1107 10:02:50.583996   19943 start.go:808] validating driver "docker" against &{Name:newest-cni-100155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-100155 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 10:02:50.584124   19943 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 10:02:50.587944   19943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 10:02:50.729146   19943 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-07 18:02:50.658602475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 10:02:50.729301   19943 start_flags.go:920] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1107 10:02:50.729317   19943 cni.go:95] Creating CNI manager for ""
	I1107 10:02:50.729326   19943 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 10:02:50.729339   19943 start_flags.go:317] config:
	{Name:newest-cni-100155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-100155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Networ
kPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 10:02:50.771990   19943 out.go:177] * Starting control plane node newest-cni-100155 in cluster newest-cni-100155
	I1107 10:02:50.795344   19943 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 10:02:50.816886   19943 out.go:177] * Pulling base image ...
	I1107 10:02:50.842075   19943 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 10:02:50.842135   19943 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 10:02:50.842147   19943 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 10:02:50.842162   19943 cache.go:57] Caching tarball of preloaded images
	I1107 10:02:50.842324   19943 preload.go:174] Found /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 10:02:50.842340   19943 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 10:02:50.842959   19943 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/newest-cni-100155/config.json ...
	I1107 10:02:50.901815   19943 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 10:02:50.901836   19943 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 10:02:50.901847   19943 cache.go:208] Successfully downloaded all kic artifacts
	I1107 10:02:50.901896   19943 start.go:364] acquiring machines lock for newest-cni-100155: {Name:mkcc9a28e3fcda77dd46714c5593fe02db6bacb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 10:02:50.901984   19943 start.go:368] acquired machines lock for "newest-cni-100155" in 68.581µs
	I1107 10:02:50.902009   19943 start.go:96] Skipping create...Using existing machine configuration
	I1107 10:02:50.902022   19943 fix.go:55] fixHost starting: 
	I1107 10:02:50.902298   19943 cli_runner.go:164] Run: docker container inspect newest-cni-100155 --format={{.State.Status}}
	I1107 10:02:50.959483   19943 fix.go:103] recreateIfNeeded on newest-cni-100155: state=Stopped err=<nil>
	W1107 10:02:50.959513   19943 fix.go:129] unexpected machine state, will restart: <nil>
	I1107 10:02:51.004067   19943 out.go:177] * Restarting existing docker container for "newest-cni-100155" ...
	I1107 10:02:51.026499   19943 cli_runner.go:164] Run: docker start newest-cni-100155
	I1107 10:02:51.353838   19943 cli_runner.go:164] Run: docker container inspect newest-cni-100155 --format={{.State.Status}}
	I1107 10:02:51.414670   19943 kic.go:415] container "newest-cni-100155" state is running.
	I1107 10:02:51.415330   19943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-100155
	I1107 10:02:51.477803   19943 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/newest-cni-100155/config.json ...
	I1107 10:02:51.478340   19943 machine.go:88] provisioning docker machine ...
	I1107 10:02:51.478371   19943 ubuntu.go:169] provisioning hostname "newest-cni-100155"
	I1107 10:02:51.478485   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:51.539349   19943 main.go:134] libmachine: Using SSH client type: native
	I1107 10:02:51.539542   19943 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 55032 <nil> <nil>}
	I1107 10:02:51.539557   19943 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-100155 && echo "newest-cni-100155" | sudo tee /etc/hostname
	I1107 10:02:51.671810   19943 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-100155
	
	I1107 10:02:51.671930   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:51.730441   19943 main.go:134] libmachine: Using SSH client type: native
	I1107 10:02:51.730597   19943 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 55032 <nil> <nil>}
	I1107 10:02:51.730610   19943 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-100155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-100155/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-100155' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 10:02:51.846912   19943 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 10:02:51.846935   19943 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15310-2115/.minikube CaCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15310-2115/.minikube}
	I1107 10:02:51.846968   19943 ubuntu.go:177] setting up certificates
	I1107 10:02:51.846978   19943 provision.go:83] configureAuth start
	I1107 10:02:51.847072   19943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-100155
	I1107 10:02:51.905302   19943 provision.go:138] copyHostCerts
	I1107 10:02:51.905412   19943 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem, removing ...
	I1107 10:02:51.905423   19943 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 10:02:51.905542   19943 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem (1082 bytes)
	I1107 10:02:51.907118   19943 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem, removing ...
	I1107 10:02:51.907133   19943 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 10:02:51.907260   19943 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem (1123 bytes)
	I1107 10:02:51.907553   19943 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem, removing ...
	I1107 10:02:51.907559   19943 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 10:02:51.907634   19943 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem (1679 bytes)
	I1107 10:02:51.907827   19943 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem org=jenkins.newest-cni-100155 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-100155]
	I1107 10:02:52.073360   19943 provision.go:172] copyRemoteCerts
	I1107 10:02:52.073429   19943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 10:02:52.073499   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:52.133580   19943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55032 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/newest-cni-100155/id_rsa Username:docker}
	I1107 10:02:52.221899   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 10:02:52.238870   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1107 10:02:52.255937   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 10:02:52.272955   19943 provision.go:86] duration metric: configureAuth took 425.951249ms
	I1107 10:02:52.272969   19943 ubuntu.go:193] setting minikube options for container-runtime
	I1107 10:02:52.273128   19943 config.go:180] Loaded profile config "newest-cni-100155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 10:02:52.273204   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:52.330050   19943 main.go:134] libmachine: Using SSH client type: native
	I1107 10:02:52.330216   19943 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 55032 <nil> <nil>}
	I1107 10:02:52.330225   19943 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 10:02:52.447786   19943 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 10:02:52.447798   19943 ubuntu.go:71] root file system type: overlay
	I1107 10:02:52.447943   19943 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 10:02:52.448045   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:52.505073   19943 main.go:134] libmachine: Using SSH client type: native
	I1107 10:02:52.505223   19943 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 55032 <nil> <nil>}
	I1107 10:02:52.505273   19943 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 10:02:52.632203   19943 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 10:02:52.632326   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:52.688756   19943 main.go:134] libmachine: Using SSH client type: native
	I1107 10:02:52.688912   19943 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 55032 <nil> <nil>}
	I1107 10:02:52.688925   19943 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 10:02:52.813641   19943 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 10:02:52.813657   19943 machine.go:91] provisioned docker machine in 1.33526649s
	I1107 10:02:52.813683   19943 start.go:300] post-start starting for "newest-cni-100155" (driver="docker")
	I1107 10:02:52.813691   19943 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 10:02:52.813767   19943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 10:02:52.813828   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:52.870980   19943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55032 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/newest-cni-100155/id_rsa Username:docker}
	I1107 10:02:52.957896   19943 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 10:02:52.961291   19943 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 10:02:52.961306   19943 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 10:02:52.961324   19943 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 10:02:52.961333   19943 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 10:02:52.961342   19943 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/addons for local assets ...
	I1107 10:02:52.961440   19943 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/files for local assets ...
	I1107 10:02:52.961625   19943 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> 32672.pem in /etc/ssl/certs
	I1107 10:02:52.961842   19943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 10:02:52.968877   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /etc/ssl/certs/32672.pem (1708 bytes)
	I1107 10:02:52.986078   19943 start.go:303] post-start completed in 172.377025ms
	I1107 10:02:52.986168   19943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 10:02:52.986254   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:53.045189   19943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55032 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/newest-cni-100155/id_rsa Username:docker}
	I1107 10:02:53.127346   19943 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 10:02:53.132522   19943 fix.go:57] fixHost completed within 2.230435435s
	I1107 10:02:53.132535   19943 start.go:83] releasing machines lock for "newest-cni-100155", held for 2.230476974s
	I1107 10:02:53.132647   19943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-100155
	I1107 10:02:53.190390   19943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 10:02:53.190410   19943 ssh_runner.go:195] Run: systemctl --version
	I1107 10:02:53.190476   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:53.190482   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:53.249862   19943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55032 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/newest-cni-100155/id_rsa Username:docker}
	I1107 10:02:53.250261   19943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55032 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/newest-cni-100155/id_rsa Username:docker}
	I1107 10:02:53.336077   19943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1107 10:02:53.391725   19943 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I1107 10:02:53.405145   19943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 10:02:53.471552   19943 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1107 10:02:53.558407   19943 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 10:02:53.568683   19943 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 10:02:53.568755   19943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 10:02:53.578379   19943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 10:02:53.591196   19943 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 10:02:53.663593   19943 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 10:02:53.730187   19943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 10:02:53.795693   19943 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 10:02:54.057948   19943 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 10:02:54.133024   19943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 10:02:54.206963   19943 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1107 10:02:54.216843   19943 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 10:02:54.216926   19943 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 10:02:54.221485   19943 start.go:472] Will wait 60s for crictl version
	I1107 10:02:54.221562   19943 ssh_runner.go:195] Run: sudo crictl version
	I1107 10:02:54.252951   19943 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1107 10:02:54.253057   19943 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 10:02:54.286441   19943 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 10:02:54.341944   19943 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1107 10:02:54.342032   19943 cli_runner.go:164] Run: docker exec -t newest-cni-100155 dig +short host.docker.internal
	I1107 10:02:54.467478   19943 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1107 10:02:54.467617   19943 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1107 10:02:54.472246   19943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 10:02:54.483258   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:54.561782   19943 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-11-07 17:45:15 UTC, end at Mon 2022-11-07 18:02:55 UTC. --
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[130]: time="2022-11-07T17:45:17.642163935Z" level=info msg="Processing signal 'terminated'"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[130]: time="2022-11-07T17:45:17.643228381Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[130]: time="2022-11-07T17:45:17.643875758Z" level=info msg="Daemon shutdown complete"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[130]: time="2022-11-07T17:45:17.643949836Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Nov 07 17:45:17 old-k8s-version-093929 systemd[1]: docker.service: Succeeded.
	Nov 07 17:45:17 old-k8s-version-093929 systemd[1]: Stopped Docker Application Container Engine.
	Nov 07 17:45:17 old-k8s-version-093929 systemd[1]: Starting Docker Application Container Engine...
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.695361650Z" level=info msg="Starting up"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.696807685Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.696840105Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.696863638Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.696873330Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.698141384Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.698178354Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.698194160Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.698202660Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.701972523Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.705958338Z" level=info msg="Loading containers: start."
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.781668503Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.812176137Z" level=info msg="Loading containers: done."
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.822689120Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.822804604Z" level=info msg="Daemon has completed initialization"
	Nov 07 17:45:17 old-k8s-version-093929 systemd[1]: Started Docker Application Container Engine.
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.843870638Z" level=info msg="API listen on [::]:2376"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.850307657Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-11-07T18:02:57Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  18:02:57 up  1:32,  0 users,  load average: 0.45, 0.64, 0.86
	Linux old-k8s-version-093929 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-11-07 17:45:15 UTC, end at Mon 2022-11-07 18:02:57 UTC. --
	Nov 07 18:02:56 old-k8s-version-093929 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 07 18:02:56 old-k8s-version-093929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 927.
	Nov 07 18:02:56 old-k8s-version-093929 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 07 18:02:56 old-k8s-version-093929 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 07 18:02:56 old-k8s-version-093929 kubelet[24563]: I1107 18:02:56.920846   24563 server.go:410] Version: v1.16.0
	Nov 07 18:02:56 old-k8s-version-093929 kubelet[24563]: I1107 18:02:56.921408   24563 plugins.go:100] No cloud provider specified.
	Nov 07 18:02:56 old-k8s-version-093929 kubelet[24563]: I1107 18:02:56.921458   24563 server.go:773] Client rotation is on, will bootstrap in background
	Nov 07 18:02:56 old-k8s-version-093929 kubelet[24563]: I1107 18:02:56.923184   24563 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 07 18:02:56 old-k8s-version-093929 kubelet[24563]: W1107 18:02:56.923862   24563 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 07 18:02:56 old-k8s-version-093929 kubelet[24563]: W1107 18:02:56.923951   24563 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 07 18:02:56 old-k8s-version-093929 kubelet[24563]: F1107 18:02:56.924012   24563 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 07 18:02:56 old-k8s-version-093929 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 07 18:02:56 old-k8s-version-093929 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 07 18:02:57 old-k8s-version-093929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Nov 07 18:02:57 old-k8s-version-093929 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 07 18:02:57 old-k8s-version-093929 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 07 18:02:57 old-k8s-version-093929 kubelet[24585]: I1107 18:02:57.661691   24585 server.go:410] Version: v1.16.0
	Nov 07 18:02:57 old-k8s-version-093929 kubelet[24585]: I1107 18:02:57.662079   24585 plugins.go:100] No cloud provider specified.
	Nov 07 18:02:57 old-k8s-version-093929 kubelet[24585]: I1107 18:02:57.662132   24585 server.go:773] Client rotation is on, will bootstrap in background
	Nov 07 18:02:57 old-k8s-version-093929 kubelet[24585]: I1107 18:02:57.663946   24585 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 07 18:02:57 old-k8s-version-093929 kubelet[24585]: W1107 18:02:57.664739   24585 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 07 18:02:57 old-k8s-version-093929 kubelet[24585]: W1107 18:02:57.665052   24585 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 07 18:02:57 old-k8s-version-093929 kubelet[24585]: F1107 18:02:57.665130   24585 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 07 18:02:57 old-k8s-version-093929 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 07 18:02:57 old-k8s-version-093929 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 10:02:57.678329   20033 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-093929 -n old-k8s-version-093929
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-093929 -n old-k8s-version-093929: exit status 2 (408.930169ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-093929" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1107 10:02:58.843508    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
E1107 10:03:03.270567    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:03:21.689593    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:03:31.003015    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:04:06.001387    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 10:04:07.794189    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:04:27.560046    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/cilium-092105/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:04:54.060658    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:05:12.815663    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:05:38.341128    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:06:06.785763    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
E1107 10:06:06.791114    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
E1107 10:06:06.801893    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
E1107 10:06:06.823520    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
E1107 10:06:06.865249    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
E1107 10:06:06.947534    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
E1107 10:06:07.108598    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
E1107 10:06:07.428817    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
E1107 10:06:08.071081    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
E1107 10:06:09.353179    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
E1107 10:06:11.915594    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
E1107 10:06:12.446289    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:06:17.037991    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:06:27.280566    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:06:47.762893    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:07:20.711302    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:07:28.724092    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:07:53.327435    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:07:58.851825    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
E1107 10:08:03.279012    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:08:21.698519    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:08:31.010106    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:08:50.648443    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:09:06.010771    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 10:09:07.801364    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:09:21.900292    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1107 10:09:27.567550    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/cilium-092105/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E1107 10:10:12.822833    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded [identical warning repeated 25 times]
E1107 10:10:38.348306    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded [identical warning repeated 18 times]
E1107 10:10:56.381279    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded [identical warning repeated 11 times]
E1107 10:11:06.793365    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded [identical warning repeated 5 times]
E1107 10:11:12.455539    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded [identical warning repeated 13 times]
E1107 10:11:24.757723    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded [identical warning repeated 9 times]
E1107 10:11:34.494967    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/default-k8s-diff-port-095521/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded [identical warning repeated 36 times]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-093929 -n old-k8s-version-093929
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-093929 -n old-k8s-version-093929: exit status 2 (391.188642ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-093929" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-093929 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-093929 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.01µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-093929 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
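(For reference when reading the failure above: the commands below are an illustrative, hand-run equivalent of what the test was polling for; they are not part of the captured log. The kubectl context, namespace, label selector, and expected image come from the log lines, while the jsonpath query is just one way to inspect the deployed image.)

  # list the dashboard pods the test waited 9m0s for
  kubectl --context old-k8s-version-093929 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

  # show the image(s) used by the dashboard-metrics-scraper deployment,
  # which the test expected to contain "k8s.gcr.io/echoserver:1.4"
  kubectl --context old-k8s-version-093929 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
    -o jsonpath='{.spec.template.spec.containers[*].image}'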
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-093929
helpers_test.go:235: (dbg) docker inspect old-k8s-version-093929:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd",
	        "Created": "2022-11-07T17:39:36.809249754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280182,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T17:45:15.193119617Z",
	            "FinishedAt": "2022-11-07T17:45:12.325790936Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/hosts",
	        "LogPath": "/var/lib/docker/containers/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd/50811d3bbfa74e9a1e5b508cd8b2301e651635c5a570c0b553fd954fe5ece8dd-json.log",
	        "Name": "/old-k8s-version-093929",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-093929:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-093929",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad-init/diff:/var/lib/docker/overlay2/8ef76795356079208b1acef7376be67a28d951b743a50dd56a60b0d456568ae9/diff:/var/lib/docker/overlay2/f9288d2baad2a30057af35c115d2ebfb4650d5d1d798a60a2334facced392980/diff:/var/lib/docker/overlay2/270f6ca71b47e51691c54d669e6e8e86c321939c053498289406eab5aa0462f5/diff:/var/lib/docker/overlay2/ebe3fe002872a87a7cc54a77192a2ea1f0efb3730f887abec35652e72f152f46/diff:/var/lib/docker/overlay2/83c9d5ae9817ab2b318ad7ba44ade4fe9c22378e15e338b8fe94c5998fbac5c4/diff:/var/lib/docker/overlay2/6426b1d4e4f369bec5066b3c17c47f9c451787be596ba417de62155901d14061/diff:/var/lib/docker/overlay2/f409955dc1056669a5ee00fa64ecfa9733f3de1a92beefeeca73cba51d930189/diff:/var/lib/docker/overlay2/3ecb7ca97b99ba70c03450a3d6d4a4452c7e9e348eec3cf89e6e8ee51aba6a8b/diff:/var/lib/docker/overlay2/9dd8fffded9665b1b7a326cb2bb3e29e3b716cdba6544940490326ddcbfe2bda/diff:/var/lib/docker/overlay2/b43aed
d977d94230f77efb53c193c1a02895ea314fcdece500155052dfeb6b29/diff:/var/lib/docker/overlay2/ba3bd8f651e3503bd8eadf3ce01b8930edaf7eb6af4044593c756be0f3c5d03a/diff:/var/lib/docker/overlay2/359c64a8e323929352da8612c231ccf0f6be76af37c8a208a9ee98c3bce5e2a1/diff:/var/lib/docker/overlay2/868ec2aea7bce1a74dcdf6c7a708b34838e8c08e795aad6e5b974d1ab15b719c/diff:/var/lib/docker/overlay2/0438a0192165f11b19940586b456c07bfa31d015147b9d008aafaacc09fbc40c/diff:/var/lib/docker/overlay2/80a13b6491a8f9f1c0f6848a375575c20f50d592cb34f21491050776a56fca61/diff:/var/lib/docker/overlay2/dd29a4d45bcf60d3684330374a82b3f3bde4245c5d49661ffdd516cd0c0af260/diff:/var/lib/docker/overlay2/ef8c6936e45d238f2880da0d94945cb610fba8a9e38cdfb3ae6674a82a8f0480/diff:/var/lib/docker/overlay2/9934f45b2cecf953b6f56ee634f63c3dd99c8c358b74fee64fdc62cef64f7723/diff:/var/lib/docker/overlay2/f5ccdcf1811b84ddfcc2efdc07e5feefa2803c1fe476b6653b0a6af55c2e684f/diff:/var/lib/docker/overlay2/2b3b062a0d083aedf009b6c8dde21debe0396b301936ec1950364a1d0ef86b6d/diff:/var/lib/d
ocker/overlay2/db91c57bd6754e3dbdc6c234df413d494606d408e284454bf7ab30cd23f9e840/diff:/var/lib/docker/overlay2/6538f86ce38383e3a133480b44c25afa8b31a61935d6f87270e2cc139e424425/diff:/var/lib/docker/overlay2/80972648e2aa65675fe7f3de22feae57951c0092d5f963f2430650b071940bba/diff:/var/lib/docker/overlay2/19dc0f28f2a85362d2b586f65ab00efa8a97868656af9dc5911259dd3ca649ac/diff:/var/lib/docker/overlay2/99eff050eadab512f36f80d63e8b57d9aa45ef607d723d7ac3f20ece8310a758/diff:/var/lib/docker/overlay2/d6309ab08fa5212992e2b5125645ad32bce2940b50c5e8a5b72e7c7531eb80b4/diff:/var/lib/docker/overlay2/c4d3d6d4212753e50a5f68577281382a30773fb33ca98730aebdfd86d48f612c/diff:/var/lib/docker/overlay2/4292068e16912b59305479ae020d9aa923d57157c4a28dd11e69102be9c1541a/diff:/var/lib/docker/overlay2/2274c567eadc1a99c8173258b3794df0df44fd1abac0aaae2100133ad15b3f30/diff:/var/lib/docker/overlay2/e3bb447cc7563c5af39c4076a93bb7b33bd1a7c6c5ccef7fea2a6a99deddf9f3/diff:/var/lib/docker/overlay2/4329b8a4d7648d8e3bb46a144b9939a5026fa69e5ac188a778cf6ede21a
9627e/diff:/var/lib/docker/overlay2/b600639ff99f881a9eb993fd36e2faf1c0f88a869675ab9d8ec116efc2642784/diff:/var/lib/docker/overlay2/da083fbec4f2fa2681bbaaaa559fdcc46ec2a520e7b9ced39197e805a661fda3/diff:/var/lib/docker/overlay2/63848d00284d16d750a7e746c8be62f8c15819bc2fcb72297788f3c9647257e6/diff:/var/lib/docker/overlay2/3fd667008c6a5c1c5828bb4e003fc21c477a31c4d59b5b675a3886d8a7cb782d/diff:/var/lib/docker/overlay2/6b125cd950aed912fcc597ce8a96bbb5af3dbba111d6eb683ea981387e02e99d/diff:/var/lib/docker/overlay2/b4c672faa14a55ba585c6063024785d7913afc546dd6d04975591d2e13d7b52f/diff:/var/lib/docker/overlay2/c2c0287a05145a26d3313d4e33799ea96103a20115734a66a3c2af8fe728b170/diff:/var/lib/docker/overlay2/dba7b9788bd657997c8cee3b3ef21f9bc4ade7b5a0da25526255047311da571d/diff:/var/lib/docker/overlay2/1f3ae87b3ce804fde9f857de6cb225d5afa00aa39260d197d77f67e840e2d285/diff:/var/lib/docker/overlay2/603b72832425bade21ef2d76583dbe61a46ff7fbe7277673cbc6cd52cf7613dd/diff:/var/lib/docker/overlay2/a47793b1e0564c094c05134af06d2d46a6bcb7
6089b3836b831863ef51c21684/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39a041573ad3eca7ab697263fe10ea171795f26b07a4716ba8ee670c2bcbcaad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-093929",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-093929/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-093929",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-093929",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-093929",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "78b2525896e2425ce76d453e029d9934dbd9eef1f99a4534a067bd3dedbbaf31",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53969"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53970"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53971"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53972"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53973"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/78b2525896e2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-093929": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "50811d3bbfa7",
	                        "old-k8s-version-093929"
	                    ],
	                    "NetworkID": "85b1c6253454469ed38e54ce96d4baef11c9b5b3afd90032e806121a14971f03",
	                    "EndpointID": "0eb940817c54781bdb3cdfa6365fbd23635a65c83ea00240310b1565886e76f0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
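(The docker inspect output above shows each container port published to an ephemeral host port, e.g. 8443/tcp -> 53973. As an illustrative follow-up, not taken from the log, the same mapping can be read back with an inspect format string; the template below assumes docker's standard Go-template syntax for inspect output.)

  # print the host port published for the kic container's apiserver port (8443/tcp)
  docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-093929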
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-093929 -n old-k8s-version-093929
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-093929 -n old-k8s-version-093929: exit status 2 (386.380271ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
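(The status checks in this log query {{.APIServer}} and {{.Host}} in separate calls and disagree: the container is Running while the apiserver is Stopped. As an illustrative variant, not from the log, both fields can be requested at once; combining them in a single --format template is an assumption about minikube's status template handling.)

  out/minikube-darwin-amd64 status -p old-k8s-version-093929 --format='{{.Host}} {{.APIServer}}'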
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-093929 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-093929 logs -n 25: (3.400697906s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-094848                                      | embed-certs-094848           | jenkins | v1.28.0 | 07 Nov 22 09:55 PST | 07 Nov 22 09:55 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p embed-certs-094848                                      | embed-certs-094848           | jenkins | v1.28.0 | 07 Nov 22 09:55 PST | 07 Nov 22 09:55 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-094848                                      | embed-certs-094848           | jenkins | v1.28.0 | 07 Nov 22 09:55 PST | 07 Nov 22 09:55 PST |
	| delete  | -p embed-certs-094848                                      | embed-certs-094848           | jenkins | v1.28.0 | 07 Nov 22 09:55 PST | 07 Nov 22 09:55 PST |
	| delete  | -p                                                         | disable-driver-mounts-095521 | jenkins | v1.28.0 | 07 Nov 22 09:55 PST | 07 Nov 22 09:55 PST |
	|         | disable-driver-mounts-095521                               |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 09:55 PST | 07 Nov 22 09:56 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 09:56 PST | 07 Nov 22 09:56 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 09:56 PST | 07 Nov 22 09:56 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-095521           | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 09:56 PST | 07 Nov 22 09:56 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 09:56 PST | 07 Nov 22 10:01 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 10:01 PST | 07 Nov 22 10:01 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                              |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 10:01 PST | 07 Nov 22 10:01 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 10:01 PST | 07 Nov 22 10:01 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 10:01 PST | 07 Nov 22 10:01 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-095521 | jenkins | v1.28.0 | 07 Nov 22 10:01 PST | 07 Nov 22 10:01 PST |
	|         | default-k8s-diff-port-095521                               |                              |         |         |                     |                     |
	| start   | -p newest-cni-100155 --memory=2200 --alsologtostderr       | newest-cni-100155            | jenkins | v1.28.0 | 07 Nov 22 10:01 PST | 07 Nov 22 10:02 PST |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-100155                 | newest-cni-100155            | jenkins | v1.28.0 | 07 Nov 22 10:02 PST | 07 Nov 22 10:02 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-100155                                       | newest-cni-100155            | jenkins | v1.28.0 | 07 Nov 22 10:02 PST | 07 Nov 22 10:02 PST |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-100155                      | newest-cni-100155            | jenkins | v1.28.0 | 07 Nov 22 10:02 PST | 07 Nov 22 10:02 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-100155 --memory=2200 --alsologtostderr       | newest-cni-100155            | jenkins | v1.28.0 | 07 Nov 22 10:02 PST | 07 Nov 22 10:03 PST |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-100155 sudo                                  | newest-cni-100155            | jenkins | v1.28.0 | 07 Nov 22 10:03 PST | 07 Nov 22 10:03 PST |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-100155                                       | newest-cni-100155            | jenkins | v1.28.0 | 07 Nov 22 10:03 PST | 07 Nov 22 10:03 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-100155                                       | newest-cni-100155            | jenkins | v1.28.0 | 07 Nov 22 10:03 PST | 07 Nov 22 10:03 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-100155                                       | newest-cni-100155            | jenkins | v1.28.0 | 07 Nov 22 10:03 PST | 07 Nov 22 10:03 PST |
	| delete  | -p newest-cni-100155                                       | newest-cni-100155            | jenkins | v1.28.0 | 07 Nov 22 10:03 PST | 07 Nov 22 10:03 PST |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 10:02:50
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 10:02:50.101130   19943 out.go:296] Setting OutFile to fd 1 ...
	I1107 10:02:50.101323   19943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 10:02:50.101329   19943 out.go:309] Setting ErrFile to fd 2...
	I1107 10:02:50.101337   19943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 10:02:50.101453   19943 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	I1107 10:02:50.101954   19943 out.go:303] Setting JSON to false
	I1107 10:02:50.121007   19943 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5545,"bootTime":1667838625,"procs":390,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1107 10:02:50.121099   19943 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 10:02:50.143367   19943 out.go:177] * [newest-cni-100155] minikube v1.28.0 on Darwin 13.0
	I1107 10:02:50.185131   19943 notify.go:220] Checking for updates...
	I1107 10:02:50.206881   19943 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 10:02:50.228090   19943 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 10:02:50.249372   19943 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 10:02:50.270921   19943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 10:02:50.292331   19943 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	I1107 10:02:50.314891   19943 config.go:180] Loaded profile config "newest-cni-100155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 10:02:50.315565   19943 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 10:02:50.378558   19943 docker.go:137] docker version: linux-20.10.20
	I1107 10:02:50.378701   19943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 10:02:50.519336   19943 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-07 18:02:50.448380495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 10:02:50.562962   19943 out.go:177] * Using the docker driver based on existing profile
	I1107 10:02:50.583967   19943 start.go:282] selected driver: docker
	I1107 10:02:50.583996   19943 start.go:808] validating driver "docker" against &{Name:newest-cni-100155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-100155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 10:02:50.584124   19943 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 10:02:50.587944   19943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 10:02:50.729146   19943 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-07 18:02:50.658602475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 10:02:50.729301   19943 start_flags.go:920] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1107 10:02:50.729317   19943 cni.go:95] Creating CNI manager for ""
	I1107 10:02:50.729326   19943 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 10:02:50.729339   19943 start_flags.go:317] config:
	{Name:newest-cni-100155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-100155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 10:02:50.771990   19943 out.go:177] * Starting control plane node newest-cni-100155 in cluster newest-cni-100155
	I1107 10:02:50.795344   19943 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 10:02:50.816886   19943 out.go:177] * Pulling base image ...
	I1107 10:02:50.842075   19943 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 10:02:50.842135   19943 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 10:02:50.842147   19943 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 10:02:50.842162   19943 cache.go:57] Caching tarball of preloaded images
	I1107 10:02:50.842324   19943 preload.go:174] Found /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 10:02:50.842340   19943 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 10:02:50.842959   19943 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/newest-cni-100155/config.json ...
	I1107 10:02:50.901815   19943 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 10:02:50.901836   19943 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 10:02:50.901847   19943 cache.go:208] Successfully downloaded all kic artifacts
	I1107 10:02:50.901896   19943 start.go:364] acquiring machines lock for newest-cni-100155: {Name:mkcc9a28e3fcda77dd46714c5593fe02db6bacb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 10:02:50.901984   19943 start.go:368] acquired machines lock for "newest-cni-100155" in 68.581µs
	I1107 10:02:50.902009   19943 start.go:96] Skipping create...Using existing machine configuration
	I1107 10:02:50.902022   19943 fix.go:55] fixHost starting: 
	I1107 10:02:50.902298   19943 cli_runner.go:164] Run: docker container inspect newest-cni-100155 --format={{.State.Status}}
	I1107 10:02:50.959483   19943 fix.go:103] recreateIfNeeded on newest-cni-100155: state=Stopped err=<nil>
	W1107 10:02:50.959513   19943 fix.go:129] unexpected machine state, will restart: <nil>
	I1107 10:02:51.004067   19943 out.go:177] * Restarting existing docker container for "newest-cni-100155" ...
	I1107 10:02:51.026499   19943 cli_runner.go:164] Run: docker start newest-cni-100155
	I1107 10:02:51.353838   19943 cli_runner.go:164] Run: docker container inspect newest-cni-100155 --format={{.State.Status}}
	I1107 10:02:51.414670   19943 kic.go:415] container "newest-cni-100155" state is running.
	I1107 10:02:51.415330   19943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-100155
	I1107 10:02:51.477803   19943 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/newest-cni-100155/config.json ...
	I1107 10:02:51.478340   19943 machine.go:88] provisioning docker machine ...
	I1107 10:02:51.478371   19943 ubuntu.go:169] provisioning hostname "newest-cni-100155"
	I1107 10:02:51.478485   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:51.539349   19943 main.go:134] libmachine: Using SSH client type: native
	I1107 10:02:51.539542   19943 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 55032 <nil> <nil>}
	I1107 10:02:51.539557   19943 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-100155 && echo "newest-cni-100155" | sudo tee /etc/hostname
	I1107 10:02:51.671810   19943 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-100155
	
	I1107 10:02:51.671930   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:51.730441   19943 main.go:134] libmachine: Using SSH client type: native
	I1107 10:02:51.730597   19943 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 55032 <nil> <nil>}
	I1107 10:02:51.730610   19943 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-100155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-100155/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-100155' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 10:02:51.846912   19943 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 10:02:51.846935   19943 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15310-2115/.minikube CaCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15310-2115/.minikube}
	I1107 10:02:51.846968   19943 ubuntu.go:177] setting up certificates
	I1107 10:02:51.846978   19943 provision.go:83] configureAuth start
	I1107 10:02:51.847072   19943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-100155
	I1107 10:02:51.905302   19943 provision.go:138] copyHostCerts
	I1107 10:02:51.905412   19943 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem, removing ...
	I1107 10:02:51.905423   19943 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
	I1107 10:02:51.905542   19943 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem (1082 bytes)
	I1107 10:02:51.907118   19943 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem, removing ...
	I1107 10:02:51.907133   19943 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
	I1107 10:02:51.907260   19943 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem (1123 bytes)
	I1107 10:02:51.907553   19943 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem, removing ...
	I1107 10:02:51.907559   19943 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
	I1107 10:02:51.907634   19943 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem (1679 bytes)
	I1107 10:02:51.907827   19943 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem org=jenkins.newest-cni-100155 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-100155]
	I1107 10:02:52.073360   19943 provision.go:172] copyRemoteCerts
	I1107 10:02:52.073429   19943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 10:02:52.073499   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:52.133580   19943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55032 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/newest-cni-100155/id_rsa Username:docker}
	I1107 10:02:52.221899   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 10:02:52.238870   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1107 10:02:52.255937   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 10:02:52.272955   19943 provision.go:86] duration metric: configureAuth took 425.951249ms
	I1107 10:02:52.272969   19943 ubuntu.go:193] setting minikube options for container-runtime
	I1107 10:02:52.273128   19943 config.go:180] Loaded profile config "newest-cni-100155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 10:02:52.273204   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:52.330050   19943 main.go:134] libmachine: Using SSH client type: native
	I1107 10:02:52.330216   19943 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 55032 <nil> <nil>}
	I1107 10:02:52.330225   19943 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 10:02:52.447786   19943 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 10:02:52.447798   19943 ubuntu.go:71] root file system type: overlay
	I1107 10:02:52.447943   19943 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 10:02:52.448045   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:52.505073   19943 main.go:134] libmachine: Using SSH client type: native
	I1107 10:02:52.505223   19943 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 55032 <nil> <nil>}
	I1107 10:02:52.505273   19943 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 10:02:52.632203   19943 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 10:02:52.632326   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:52.688756   19943 main.go:134] libmachine: Using SSH client type: native
	I1107 10:02:52.688912   19943 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil>  [] 0s} 127.0.0.1 55032 <nil> <nil>}
	I1107 10:02:52.688925   19943 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 10:02:52.813641   19943 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 10:02:52.813657   19943 machine.go:91] provisioned docker machine in 1.33526649s
	I1107 10:02:52.813683   19943 start.go:300] post-start starting for "newest-cni-100155" (driver="docker")
	I1107 10:02:52.813691   19943 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 10:02:52.813767   19943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 10:02:52.813828   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:52.870980   19943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55032 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/newest-cni-100155/id_rsa Username:docker}
	I1107 10:02:52.957896   19943 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 10:02:52.961291   19943 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 10:02:52.961306   19943 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 10:02:52.961324   19943 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 10:02:52.961333   19943 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 10:02:52.961342   19943 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/addons for local assets ...
	I1107 10:02:52.961440   19943 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/files for local assets ...
	I1107 10:02:52.961625   19943 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> 32672.pem in /etc/ssl/certs
	I1107 10:02:52.961842   19943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 10:02:52.968877   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /etc/ssl/certs/32672.pem (1708 bytes)
	I1107 10:02:52.986078   19943 start.go:303] post-start completed in 172.377025ms
	I1107 10:02:52.986168   19943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 10:02:52.986254   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:53.045189   19943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55032 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/newest-cni-100155/id_rsa Username:docker}
	I1107 10:02:53.127346   19943 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 10:02:53.132522   19943 fix.go:57] fixHost completed within 2.230435435s
	I1107 10:02:53.132535   19943 start.go:83] releasing machines lock for "newest-cni-100155", held for 2.230476974s
	I1107 10:02:53.132647   19943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-100155
	I1107 10:02:53.190390   19943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 10:02:53.190410   19943 ssh_runner.go:195] Run: systemctl --version
	I1107 10:02:53.190476   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:53.190482   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:53.249862   19943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55032 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/newest-cni-100155/id_rsa Username:docker}
	I1107 10:02:53.250261   19943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55032 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/newest-cni-100155/id_rsa Username:docker}
	I1107 10:02:53.336077   19943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1107 10:02:53.391725   19943 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I1107 10:02:53.405145   19943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 10:02:53.471552   19943 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1107 10:02:53.558407   19943 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 10:02:53.568683   19943 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 10:02:53.568755   19943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 10:02:53.578379   19943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 10:02:53.591196   19943 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 10:02:53.663593   19943 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 10:02:53.730187   19943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 10:02:53.795693   19943 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 10:02:54.057948   19943 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 10:02:54.133024   19943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 10:02:54.206963   19943 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1107 10:02:54.216843   19943 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 10:02:54.216926   19943 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 10:02:54.221485   19943 start.go:472] Will wait 60s for crictl version
	I1107 10:02:54.221562   19943 ssh_runner.go:195] Run: sudo crictl version
	I1107 10:02:54.252951   19943 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1107 10:02:54.253057   19943 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 10:02:54.286441   19943 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 10:02:54.341944   19943 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1107 10:02:54.342032   19943 cli_runner.go:164] Run: docker exec -t newest-cni-100155 dig +short host.docker.internal
	I1107 10:02:54.467478   19943 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1107 10:02:54.467617   19943 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1107 10:02:54.472246   19943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 10:02:54.483258   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:54.561782   19943 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I1107 10:02:54.583696   19943 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 10:02:54.583801   19943 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 10:02:54.609782   19943 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1107 10:02:54.609803   19943 docker.go:543] Images already preloaded, skipping extraction
	I1107 10:02:54.609904   19943 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 10:02:54.636266   19943 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1107 10:02:54.636290   19943 cache_images.go:84] Images are preloaded, skipping loading
	I1107 10:02:54.636395   19943 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 10:02:54.723313   19943 cni.go:95] Creating CNI manager for ""
	I1107 10:02:54.723328   19943 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 10:02:54.723341   19943 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I1107 10:02:54.723354   19943 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-100155 NodeName:newest-cni-100155 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 10:02:54.723482   19943 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-100155"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 10:02:54.723563   19943 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-100155 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:newest-cni-100155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 10:02:54.723624   19943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1107 10:02:54.731990   19943 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 10:02:54.732066   19943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 10:02:54.739148   19943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (516 bytes)
	I1107 10:02:54.751909   19943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 10:02:54.765028   19943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I1107 10:02:54.777710   19943 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1107 10:02:54.781564   19943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 10:02:54.791629   19943 certs.go:54] Setting up /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/newest-cni-100155 for IP: 192.168.67.2
	I1107 10:02:54.791749   19943 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key
	I1107 10:02:54.791813   19943 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key
	I1107 10:02:54.791900   19943 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/newest-cni-100155/client.key
	I1107 10:02:54.791979   19943 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/newest-cni-100155/apiserver.key.c7fa3a9e
	I1107 10:02:54.792057   19943 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/newest-cni-100155/proxy-client.key
	I1107 10:02:54.792332   19943 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem (1338 bytes)
	W1107 10:02:54.792378   19943 certs.go:384] ignoring /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267_empty.pem, impossibly tiny 0 bytes
	I1107 10:02:54.792391   19943 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 10:02:54.792427   19943 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem (1082 bytes)
	I1107 10:02:54.792465   19943 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem (1123 bytes)
	I1107 10:02:54.792499   19943 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem (1679 bytes)
	I1107 10:02:54.792576   19943 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem (1708 bytes)
	I1107 10:02:54.793102   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/newest-cni-100155/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 10:02:54.810217   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/newest-cni-100155/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1107 10:02:54.827237   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/newest-cni-100155/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 10:02:54.844539   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/newest-cni-100155/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1107 10:02:54.861851   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 10:02:54.878783   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 10:02:54.895717   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 10:02:54.913502   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 10:02:54.932185   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 10:02:54.951738   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem --> /usr/share/ca-certificates/3267.pem (1338 bytes)
	I1107 10:02:54.970983   19943 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /usr/share/ca-certificates/32672.pem (1708 bytes)
	I1107 10:02:54.989157   19943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 10:02:55.002694   19943 ssh_runner.go:195] Run: openssl version
	I1107 10:02:55.009159   19943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3267.pem && ln -fs /usr/share/ca-certificates/3267.pem /etc/ssl/certs/3267.pem"
	I1107 10:02:55.018700   19943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3267.pem
	I1107 10:02:55.022976   19943 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/3267.pem
	I1107 10:02:55.023029   19943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3267.pem
	I1107 10:02:55.028838   19943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3267.pem /etc/ssl/certs/51391683.0"
	I1107 10:02:55.037717   19943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32672.pem && ln -fs /usr/share/ca-certificates/32672.pem /etc/ssl/certs/32672.pem"
	I1107 10:02:55.047111   19943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32672.pem
	I1107 10:02:55.051276   19943 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/32672.pem
	I1107 10:02:55.051326   19943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32672.pem
	I1107 10:02:55.057169   19943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32672.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 10:02:55.065384   19943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 10:02:55.075187   19943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 10:02:55.080258   19943 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 10:02:55.080335   19943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 10:02:55.086195   19943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 10:02:55.095041   19943 kubeadm.go:396] StartCluster: {Name:newest-cni-100155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-100155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 10:02:55.095190   19943 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 10:02:55.118653   19943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 10:02:55.131235   19943 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1107 10:02:55.131249   19943 kubeadm.go:627] restartCluster start
	I1107 10:02:55.131304   19943 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1107 10:02:55.138269   19943 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:55.138347   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:02:55.197744   19943 kubeconfig.go:135] verify returned: extract IP: "newest-cni-100155" does not appear in /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 10:02:55.197912   19943 kubeconfig.go:146] "newest-cni-100155" context is missing from /Users/jenkins/minikube-integration/15310-2115/kubeconfig - will repair!
	I1107 10:02:55.198238   19943 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/kubeconfig: {Name:mk892d56d979702eee7d784abc692970bda7bca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 10:02:55.199460   19943 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1107 10:02:55.207616   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:55.207698   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:55.216932   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:55.416982   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:55.417059   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:55.425711   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:55.618887   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:55.619065   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:55.630078   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:55.819066   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:55.819298   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:55.830040   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:56.019096   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:56.019252   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:56.029996   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:56.219117   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:56.219315   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:56.229759   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:56.417085   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:56.417234   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:56.426556   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:56.619235   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:56.619386   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:56.630758   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:56.819247   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:56.819358   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:56.829896   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:57.018125   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:57.018289   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:57.028734   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:57.217842   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:57.218059   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:57.228398   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:57.418966   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:57.419162   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:57.430438   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:57.617426   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:57.617506   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:57.628493   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:57.819091   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:57.819259   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:57.829760   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:58.017085   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:58.017153   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:58.026077   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:58.217172   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:58.217244   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:58.226635   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:58.226645   19943 api_server.go:165] Checking apiserver status ...
	I1107 10:02:58.226718   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 10:02:58.235332   19943 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:58.235345   19943 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I1107 10:02:58.235353   19943 kubeadm.go:1114] stopping kube-system containers ...
	I1107 10:02:58.235435   19943 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 10:02:58.261746   19943 docker.go:444] Stopping containers: [6edeb06e2a69 fe75a9f8617e b25cec3c0ff9 f8c1739c37cf f46d389f36f2 f8fc82574db9 a63f524d226a 174343f49267 6f519a830319 852174d29987 80348e3d5316 9610c34c8e22 c772f2babe00 dbc387cf3b95 594c522b8e4f e253504b5889]
	I1107 10:02:58.261823   19943 ssh_runner.go:195] Run: docker stop 6edeb06e2a69 fe75a9f8617e b25cec3c0ff9 f8c1739c37cf f46d389f36f2 f8fc82574db9 a63f524d226a 174343f49267 6f519a830319 852174d29987 80348e3d5316 9610c34c8e22 c772f2babe00 dbc387cf3b95 594c522b8e4f e253504b5889
	I1107 10:02:58.286927   19943 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1107 10:02:58.298273   19943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 10:02:58.306578   19943 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Nov  7 18:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Nov  7 18:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Nov  7 18:02 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Nov  7 18:02 /etc/kubernetes/scheduler.conf
	
	I1107 10:02:58.306658   19943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1107 10:02:58.316046   19943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1107 10:02:58.324129   19943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1107 10:02:58.332120   19943 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:58.332195   19943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1107 10:02:58.340154   19943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1107 10:02:58.348119   19943 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1107 10:02:58.348183   19943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1107 10:02:58.356923   19943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 10:02:58.365062   19943 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1107 10:02:58.365073   19943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 10:02:58.418697   19943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 10:02:59.644253   19943 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.225496734s)
	I1107 10:02:59.644268   19943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1107 10:02:59.771968   19943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 10:02:59.821762   19943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1107 10:02:59.932010   19943 api_server.go:51] waiting for apiserver process to appear ...
	I1107 10:02:59.932079   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 10:03:00.446640   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 10:03:00.946338   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 10:03:00.958638   19943 api_server.go:71] duration metric: took 1.026594546s to wait for apiserver process to appear ...
	I1107 10:03:00.958666   19943 api_server.go:87] waiting for apiserver healthz status ...
	I1107 10:03:00.958683   19943 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55031/healthz ...
	I1107 10:03:00.959923   19943 api_server.go:268] stopped: https://127.0.0.1:55031/healthz: Get "https://127.0.0.1:55031/healthz": EOF
	I1107 10:03:01.461675   19943 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55031/healthz ...
	I1107 10:03:04.305776   19943 api_server.go:278] https://127.0.0.1:55031/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1107 10:03:04.305790   19943 api_server.go:102] status: https://127.0.0.1:55031/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 10:03:04.460643   19943 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55031/healthz ...
	I1107 10:03:04.467581   19943 api_server.go:278] https://127.0.0.1:55031/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 10:03:04.467593   19943 api_server.go:102] status: https://127.0.0.1:55031/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 10:03:04.960176   19943 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55031/healthz ...
	I1107 10:03:04.966442   19943 api_server.go:278] https://127.0.0.1:55031/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 10:03:04.966462   19943 api_server.go:102] status: https://127.0.0.1:55031/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 10:03:05.461681   19943 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55031/healthz ...
	I1107 10:03:05.469850   19943 api_server.go:278] https://127.0.0.1:55031/healthz returned 200:
	ok
	I1107 10:03:05.476291   19943 api_server.go:140] control plane version: v1.25.3
	I1107 10:03:05.476301   19943 api_server.go:130] duration metric: took 4.517495706s to wait for apiserver health ...
	I1107 10:03:05.476306   19943 cni.go:95] Creating CNI manager for ""
	I1107 10:03:05.476313   19943 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 10:03:05.476329   19943 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 10:03:05.487450   19943 system_pods.go:59] 8 kube-system pods found
	I1107 10:03:05.487470   19943 system_pods.go:61] "coredns-565d847f94-fxj2f" [68e0666c-157c-4597-bd0d-8de83119edea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 10:03:05.487475   19943 system_pods.go:61] "etcd-newest-cni-100155" [f7185ca5-ba73-43e3-a740-d51727a60b15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1107 10:03:05.487482   19943 system_pods.go:61] "kube-apiserver-newest-cni-100155" [a93d83ac-a6a0-49cf-b5a2-503762de1ded] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1107 10:03:05.487487   19943 system_pods.go:61] "kube-controller-manager-newest-cni-100155" [163ce54d-e499-4deb-a8a4-fdf948aea580] Running
	I1107 10:03:05.487491   19943 system_pods.go:61] "kube-proxy-km8km" [39c1e209-90b6-42bb-9143-93f98809dc6c] Running
	I1107 10:03:05.487496   19943 system_pods.go:61] "kube-scheduler-newest-cni-100155" [0343eaa4-d3b3-4229-b862-85c42520b12b] Running
	I1107 10:03:05.487504   19943 system_pods.go:61] "metrics-server-5c8fd5cf8-rg28q" [ff57ead6-f28e-460b-81ac-a80c264638af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 10:03:05.487509   19943 system_pods.go:61] "storage-provisioner" [046e6ffd-78c8-4ea2-9e81-b5af4f6dea0f] Running
	I1107 10:03:05.487513   19943 system_pods.go:74] duration metric: took 11.179486ms to wait for pod list to return data ...
	I1107 10:03:05.487518   19943 node_conditions.go:102] verifying NodePressure condition ...
	I1107 10:03:05.490284   19943 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1107 10:03:05.490298   19943 node_conditions.go:123] node cpu capacity is 6
	I1107 10:03:05.490330   19943 node_conditions.go:105] duration metric: took 2.806465ms to run NodePressure ...
	I1107 10:03:05.490349   19943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 10:03:05.644204   19943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 10:03:05.652338   19943 ops.go:34] apiserver oom_adj: -16
	I1107 10:03:05.652348   19943 kubeadm.go:631] restartCluster took 10.520784986s
	I1107 10:03:05.652355   19943 kubeadm.go:398] StartCluster complete in 10.557013183s
	I1107 10:03:05.652371   19943 settings.go:142] acquiring lock: {Name:mkacd69bfe5f4d7bab8b044c0ff487fe5c3f0cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 10:03:05.652473   19943 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 10:03:05.653070   19943 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/kubeconfig: {Name:mk892d56d979702eee7d784abc692970bda7bca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 10:03:05.656036   19943 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-100155" rescaled to 1
	I1107 10:03:05.656074   19943 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 10:03:05.656088   19943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 10:03:05.656107   19943 addons.go:486] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1107 10:03:05.679998   19943 out.go:177] * Verifying Kubernetes components...
	I1107 10:03:05.656242   19943 config.go:180] Loaded profile config "newest-cni-100155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 10:03:05.680077   19943 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-100155"
	I1107 10:03:05.680088   19943 addons.go:65] Setting dashboard=true in profile "newest-cni-100155"
	I1107 10:03:05.680100   19943 addons.go:65] Setting metrics-server=true in profile "newest-cni-100155"
	I1107 10:03:05.680122   19943 addons.go:65] Setting default-storageclass=true in profile "newest-cni-100155"
	I1107 10:03:05.754154   19943 addons.go:227] Setting addon storage-provisioner=true in "newest-cni-100155"
	I1107 10:03:05.754158   19943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-100155"
	W1107 10:03:05.754169   19943 addons.go:236] addon storage-provisioner should already be in state true
	I1107 10:03:05.754165   19943 addons.go:227] Setting addon dashboard=true in "newest-cni-100155"
	I1107 10:03:05.754190   19943 addons.go:227] Setting addon metrics-server=true in "newest-cni-100155"
	W1107 10:03:05.754201   19943 addons.go:236] addon metrics-server should already be in state true
	I1107 10:03:05.754228   19943 host.go:66] Checking if "newest-cni-100155" exists ...
	W1107 10:03:05.754195   19943 addons.go:236] addon dashboard should already be in state true
	I1107 10:03:05.754209   19943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 10:03:05.754259   19943 host.go:66] Checking if "newest-cni-100155" exists ...
	I1107 10:03:05.754291   19943 host.go:66] Checking if "newest-cni-100155" exists ...
	I1107 10:03:05.755738   19943 cli_runner.go:164] Run: docker container inspect newest-cni-100155 --format={{.State.Status}}
	I1107 10:03:05.755967   19943 cli_runner.go:164] Run: docker container inspect newest-cni-100155 --format={{.State.Status}}
	I1107 10:03:05.759285   19943 cli_runner.go:164] Run: docker container inspect newest-cni-100155 --format={{.State.Status}}
	I1107 10:03:05.759816   19943 cli_runner.go:164] Run: docker container inspect newest-cni-100155 --format={{.State.Status}}
	I1107 10:03:05.766177   19943 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1107 10:03:05.774564   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:03:05.857967   19943 addons.go:227] Setting addon default-storageclass=true in "newest-cni-100155"
	I1107 10:03:05.875964   19943 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W1107 10:03:05.934099   19943 addons.go:236] addon default-storageclass should already be in state true
	I1107 10:03:05.897086   19943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 10:03:05.934054   19943 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1107 10:03:05.934172   19943 host.go:66] Checking if "newest-cni-100155" exists ...
	I1107 10:03:05.971041   19943 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1107 10:03:06.009258   19943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1107 10:03:06.009488   19943 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 10:03:06.047061   19943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 10:03:06.010836   19943 cli_runner.go:164] Run: docker container inspect newest-cni-100155 --format={{.State.Status}}
	I1107 10:03:06.067866   19943 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I1107 10:03:06.047191   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:03:06.047209   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:03:06.063904   19943 api_server.go:51] waiting for apiserver process to appear ...
	I1107 10:03:06.089215   19943 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1107 10:03:06.089236   19943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1107 10:03:06.089319   19943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 10:03:06.089363   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:03:06.110646   19943 api_server.go:71] duration metric: took 454.528597ms to wait for apiserver process to appear ...
	I1107 10:03:06.110677   19943 api_server.go:87] waiting for apiserver healthz status ...
	I1107 10:03:06.110697   19943 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55031/healthz ...
	I1107 10:03:06.120727   19943 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 10:03:06.120744   19943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 10:03:06.120866   19943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-100155
	I1107 10:03:06.122152   19943 api_server.go:278] https://127.0.0.1:55031/healthz returned 200:
	ok
	I1107 10:03:06.125815   19943 api_server.go:140] control plane version: v1.25.3
	I1107 10:03:06.125854   19943 api_server.go:130] duration metric: took 15.167514ms to wait for apiserver health ...
	I1107 10:03:06.125866   19943 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 10:03:06.135559   19943 system_pods.go:59] 8 kube-system pods found
	I1107 10:03:06.135588   19943 system_pods.go:61] "coredns-565d847f94-fxj2f" [68e0666c-157c-4597-bd0d-8de83119edea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 10:03:06.135606   19943 system_pods.go:61] "etcd-newest-cni-100155" [f7185ca5-ba73-43e3-a740-d51727a60b15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1107 10:03:06.135619   19943 system_pods.go:61] "kube-apiserver-newest-cni-100155" [a93d83ac-a6a0-49cf-b5a2-503762de1ded] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1107 10:03:06.135636   19943 system_pods.go:61] "kube-controller-manager-newest-cni-100155" [163ce54d-e499-4deb-a8a4-fdf948aea580] Running
	I1107 10:03:06.135644   19943 system_pods.go:61] "kube-proxy-km8km" [39c1e209-90b6-42bb-9143-93f98809dc6c] Running
	I1107 10:03:06.135657   19943 system_pods.go:61] "kube-scheduler-newest-cni-100155" [0343eaa4-d3b3-4229-b862-85c42520b12b] Running
	I1107 10:03:06.135666   19943 system_pods.go:61] "metrics-server-5c8fd5cf8-rg28q" [ff57ead6-f28e-460b-81ac-a80c264638af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 10:03:06.135673   19943 system_pods.go:61] "storage-provisioner" [046e6ffd-78c8-4ea2-9e81-b5af4f6dea0f] Running
	I1107 10:03:06.135682   19943 system_pods.go:74] duration metric: took 9.809202ms to wait for pod list to return data ...
	I1107 10:03:06.135690   19943 default_sa.go:34] waiting for default service account to be created ...
	I1107 10:03:06.139709   19943 default_sa.go:45] found service account: "default"
	I1107 10:03:06.139727   19943 default_sa.go:55] duration metric: took 4.029514ms for default service account to be created ...
	I1107 10:03:06.139738   19943 kubeadm.go:573] duration metric: took 483.632484ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1107 10:03:06.139762   19943 node_conditions.go:102] verifying NodePressure condition ...
	I1107 10:03:06.144108   19943 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1107 10:03:06.144139   19943 node_conditions.go:123] node cpu capacity is 6
	I1107 10:03:06.144173   19943 node_conditions.go:105] duration metric: took 4.402228ms to run NodePressure ...
	I1107 10:03:06.144188   19943 start.go:217] waiting for startup goroutines ...
	I1107 10:03:06.152015   19943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55032 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/newest-cni-100155/id_rsa Username:docker}
	I1107 10:03:06.188215   19943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55032 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/newest-cni-100155/id_rsa Username:docker}
	I1107 10:03:06.189132   19943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55032 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/newest-cni-100155/id_rsa Username:docker}
	I1107 10:03:06.201149   19943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55032 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/newest-cni-100155/id_rsa Username:docker}
	I1107 10:03:06.341725   19943 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1107 10:03:06.341739   19943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I1107 10:03:06.345451   19943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 10:03:06.431489   19943 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1107 10:03:06.431505   19943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1107 10:03:06.517662   19943 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1107 10:03:06.517689   19943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1107 10:03:06.524202   19943 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 10:03:06.524216   19943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1107 10:03:06.525727   19943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 10:03:06.540302   19943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 10:03:06.623672   19943 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1107 10:03:06.623689   19943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1107 10:03:06.731074   19943 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1107 10:03:06.731089   19943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1107 10:03:06.827871   19943 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1107 10:03:06.827901   19943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I1107 10:03:06.853802   19943 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1107 10:03:06.853830   19943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1107 10:03:07.017702   19943 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1107 10:03:07.017723   19943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1107 10:03:07.042319   19943 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1107 10:03:07.042336   19943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1107 10:03:07.124566   19943 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1107 10:03:07.124582   19943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1107 10:03:07.142512   19943 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1107 10:03:07.142530   19943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1107 10:03:07.217297   19943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1107 10:03:07.970505   19943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.624979767s)
	I1107 10:03:07.970523   19943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.444737739s)
	I1107 10:03:07.970578   19943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.430211204s)
	I1107 10:03:07.970594   19943 addons.go:457] Verifying addon metrics-server=true in "newest-cni-100155"
	I1107 10:03:08.164018   19943 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-100155 addons enable metrics-server	
	
	
	I1107 10:03:08.223071   19943 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1107 10:03:08.283105   19943 addons.go:488] enableAddons completed in 2.626930022s
	I1107 10:03:08.283510   19943 ssh_runner.go:195] Run: rm -f paused
	I1107 10:03:08.358058   19943 start.go:506] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I1107 10:03:08.395431   19943 out.go:177] * Done! kubectl is now configured to use "newest-cni-100155" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-11-07 17:45:15 UTC, end at Mon 2022-11-07 18:12:10 UTC. --
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[130]: time="2022-11-07T17:45:17.642163935Z" level=info msg="Processing signal 'terminated'"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[130]: time="2022-11-07T17:45:17.643228381Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[130]: time="2022-11-07T17:45:17.643875758Z" level=info msg="Daemon shutdown complete"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[130]: time="2022-11-07T17:45:17.643949836Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Nov 07 17:45:17 old-k8s-version-093929 systemd[1]: docker.service: Succeeded.
	Nov 07 17:45:17 old-k8s-version-093929 systemd[1]: Stopped Docker Application Container Engine.
	Nov 07 17:45:17 old-k8s-version-093929 systemd[1]: Starting Docker Application Container Engine...
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.695361650Z" level=info msg="Starting up"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.696807685Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.696840105Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.696863638Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.696873330Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.698141384Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.698178354Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.698194160Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.698202660Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.701972523Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.705958338Z" level=info msg="Loading containers: start."
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.781668503Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.812176137Z" level=info msg="Loading containers: done."
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.822689120Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.822804604Z" level=info msg="Daemon has completed initialization"
	Nov 07 17:45:17 old-k8s-version-093929 systemd[1]: Started Docker Application Container Engine.
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.843870638Z" level=info msg="API listen on [::]:2376"
	Nov 07 17:45:17 old-k8s-version-093929 dockerd[426]: time="2022-11-07T17:45:17.850307657Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-11-07T18:12:12Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  18:12:12 up  1:41,  0 users,  load average: 0.32, 0.51, 0.76
	Linux old-k8s-version-093929 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-11-07 17:45:15 UTC, end at Mon 2022-11-07 18:12:12 UTC. --
	Nov 07 18:12:10 old-k8s-version-093929 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 07 18:12:11 old-k8s-version-093929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1665.
	Nov 07 18:12:11 old-k8s-version-093929 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 07 18:12:11 old-k8s-version-093929 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 07 18:12:11 old-k8s-version-093929 kubelet[34248]: I1107 18:12:11.540444   34248 server.go:410] Version: v1.16.0
	Nov 07 18:12:11 old-k8s-version-093929 kubelet[34248]: I1107 18:12:11.540675   34248 plugins.go:100] No cloud provider specified.
	Nov 07 18:12:11 old-k8s-version-093929 kubelet[34248]: I1107 18:12:11.540687   34248 server.go:773] Client rotation is on, will bootstrap in background
	Nov 07 18:12:11 old-k8s-version-093929 kubelet[34248]: I1107 18:12:11.542683   34248 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 07 18:12:11 old-k8s-version-093929 kubelet[34248]: W1107 18:12:11.543396   34248 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 07 18:12:11 old-k8s-version-093929 kubelet[34248]: W1107 18:12:11.543470   34248 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 07 18:12:11 old-k8s-version-093929 kubelet[34248]: F1107 18:12:11.543491   34248 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 07 18:12:11 old-k8s-version-093929 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 07 18:12:11 old-k8s-version-093929 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 07 18:12:12 old-k8s-version-093929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1666.
	Nov 07 18:12:12 old-k8s-version-093929 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 07 18:12:12 old-k8s-version-093929 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 07 18:12:12 old-k8s-version-093929 kubelet[34260]: I1107 18:12:12.282963   34260 server.go:410] Version: v1.16.0
	Nov 07 18:12:12 old-k8s-version-093929 kubelet[34260]: I1107 18:12:12.283156   34260 plugins.go:100] No cloud provider specified.
	Nov 07 18:12:12 old-k8s-version-093929 kubelet[34260]: I1107 18:12:12.283168   34260 server.go:773] Client rotation is on, will bootstrap in background
	Nov 07 18:12:12 old-k8s-version-093929 kubelet[34260]: I1107 18:12:12.284930   34260 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 07 18:12:12 old-k8s-version-093929 kubelet[34260]: W1107 18:12:12.285561   34260 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 07 18:12:12 old-k8s-version-093929 kubelet[34260]: W1107 18:12:12.285656   34260 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 07 18:12:12 old-k8s-version-093929 kubelet[34260]: F1107 18:12:12.285681   34260 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 07 18:12:12 old-k8s-version-093929 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 07 18:12:12 old-k8s-version-093929 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E1107 10:12:12.388594   20792 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-093929 -n old-k8s-version-093929
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-093929 -n old-k8s-version-093929: exit status 2 (389.794034ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-093929" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.67s)


Test pass (261/295)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 12.46
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.25.3/json-events 6.64
11 TestDownloadOnly/v1.25.3/preload-exists 0
14 TestDownloadOnly/v1.25.3/kubectl 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.67
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
18 TestDownloadOnlyKic 15.71
19 TestBinaryMirror 1.71
20 TestOffline 46.87
22 TestAddons/Setup 151.91
26 TestAddons/parallel/MetricsServer 5.79
27 TestAddons/parallel/HelmTiller 11.51
29 TestAddons/parallel/CSI 40.17
30 TestAddons/parallel/Headlamp 11.39
31 TestAddons/parallel/CloudSpanner 5.58
33 TestAddons/serial/GCPAuth 16.15
34 TestAddons/StoppedEnableDisable 12.96
35 TestCertOptions 32.33
36 TestCertExpiration 240.77
37 TestDockerFlags 31.62
38 TestForceSystemdFlag 30.7
39 TestForceSystemdEnv 31.78
41 TestHyperKitDriverInstallOrUpdate 7.87
44 TestErrorSpam/setup 27.44
45 TestErrorSpam/start 2.45
46 TestErrorSpam/status 1.22
47 TestErrorSpam/pause 1.86
48 TestErrorSpam/unpause 1.79
49 TestErrorSpam/stop 12.97
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 51.15
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 68.59
56 TestFunctional/serial/KubeContext 0.04
57 TestFunctional/serial/KubectlGetPods 0.08
60 TestFunctional/serial/CacheCmd/cache/add_remote 5.93
61 TestFunctional/serial/CacheCmd/cache/add_local 1.82
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
63 TestFunctional/serial/CacheCmd/cache/list 0.08
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.41
65 TestFunctional/serial/CacheCmd/cache/cache_reload 2.38
66 TestFunctional/serial/CacheCmd/cache/delete 0.16
67 TestFunctional/serial/MinikubeKubectlCmd 0.49
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.65
69 TestFunctional/serial/ExtraConfig 44.58
70 TestFunctional/serial/ComponentHealth 0.06
71 TestFunctional/serial/LogsCmd 2.91
72 TestFunctional/serial/LogsFileCmd 3.04
74 TestFunctional/parallel/ConfigCmd 0.48
75 TestFunctional/parallel/DashboardCmd 13.55
76 TestFunctional/parallel/DryRun 1.58
77 TestFunctional/parallel/InternationalLanguage 0.87
78 TestFunctional/parallel/StatusCmd 1.31
81 TestFunctional/parallel/ServiceCmd 18.89
83 TestFunctional/parallel/AddonsCmd 0.26
84 TestFunctional/parallel/PersistentVolumeClaim 27.57
86 TestFunctional/parallel/SSHCmd 0.79
87 TestFunctional/parallel/CpCmd 2.14
88 TestFunctional/parallel/MySQL 29.74
89 TestFunctional/parallel/FileSync 0.44
90 TestFunctional/parallel/CertSync 2.58
94 TestFunctional/parallel/NodeLabels 0.07
96 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
98 TestFunctional/parallel/License 0.48
99 TestFunctional/parallel/Version/short 0.12
100 TestFunctional/parallel/Version/components 0.75
101 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
102 TestFunctional/parallel/ImageCommands/ImageListTable 0.39
103 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
104 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
105 TestFunctional/parallel/ImageCommands/ImageBuild 3.86
106 TestFunctional/parallel/ImageCommands/Setup 2.51
107 TestFunctional/parallel/DockerEnv/bash 1.83
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.41
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.46
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.31
111 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.4
112 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.36
113 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.66
114 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.7
115 TestFunctional/parallel/ImageCommands/ImageRemove 0.7
116 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.2
117 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.13
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.17
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
129 TestFunctional/parallel/ProfileCmd/profile_list 0.49
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
131 TestFunctional/parallel/MountCmd/any-port 9.7
132 TestFunctional/parallel/MountCmd/specific-port 2.34
133 TestFunctional/delete_addon-resizer_images 0.15
134 TestFunctional/delete_my-image_image 0.06
135 TestFunctional/delete_minikube_cached_images 0.06
145 TestJSONOutput/start/Command 46.01
146 TestJSONOutput/start/Audit 0
148 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
149 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
151 TestJSONOutput/pause/Command 0.66
152 TestJSONOutput/pause/Audit 0
154 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/unpause/Command 0.64
158 TestJSONOutput/unpause/Audit 0
160 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/stop/Command 12.25
164 TestJSONOutput/stop/Audit 0
166 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
168 TestErrorJSONOutput 0.73
170 TestKicCustomNetwork/create_custom_network 29.25
171 TestKicCustomNetwork/use_default_bridge_network 29.57
172 TestKicExistingNetwork 30.2
173 TestKicCustomSubnet 29.28
174 TestMainNoArgs 0.08
175 TestMinikubeProfile 62.88
178 TestMountStart/serial/StartWithMountFirst 7.37
179 TestMountStart/serial/VerifyMountFirst 0.4
180 TestMountStart/serial/StartWithMountSecond 7.14
181 TestMountStart/serial/VerifyMountSecond 0.4
182 TestMountStart/serial/DeleteFirst 2.13
183 TestMountStart/serial/VerifyMountPostDelete 0.39
184 TestMountStart/serial/Stop 1.54
185 TestMountStart/serial/RestartStopped 5.31
186 TestMountStart/serial/VerifyMountPostStop 0.4
189 TestMultiNode/serial/FreshStart2Nodes 85.36
190 TestMultiNode/serial/DeployApp2Nodes 5.66
191 TestMultiNode/serial/PingHostFrom2Pods 0.88
192 TestMultiNode/serial/AddNode 26.31
193 TestMultiNode/serial/ProfileList 0.43
194 TestMultiNode/serial/CopyFile 14.48
195 TestMultiNode/serial/StopNode 13.76
196 TestMultiNode/serial/StartAfterStop 19.22
197 TestMultiNode/serial/RestartKeepsNodes 112.87
198 TestMultiNode/serial/DeleteNode 16.85
199 TestMultiNode/serial/StopMultiNode 24.88
201 TestMultiNode/serial/ValidateNameConflict 30.75
205 TestPreload 142.43
207 TestScheduledStopUnix 101.19
208 TestSkaffold 62.45
210 TestInsufficientStorage 12.8
226 TestStoppedBinaryUpgrade/Setup 0.7
228 TestStoppedBinaryUpgrade/MinikubeLogs 3.59
237 TestPause/serial/Start 43.41
238 TestPause/serial/SecondStartNoReconfiguration 51.12
239 TestPause/serial/Pause 0.71
240 TestPause/serial/VerifyStatus 0.41
241 TestPause/serial/Unpause 0.71
242 TestPause/serial/PauseAgain 0.79
243 TestPause/serial/DeletePaused 2.6
244 TestPause/serial/VerifyDeletedResources 0.56
246 TestNoKubernetes/serial/StartNoK8sWithVersion 0.37
247 TestNoKubernetes/serial/StartWithK8s 28.3
248 TestNoKubernetes/serial/StartWithStopK8s 17.58
249 TestNoKubernetes/serial/Start 6.64
250 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
251 TestNoKubernetes/serial/ProfileList 16.11
252 TestNoKubernetes/serial/Stop 1.61
253 TestNoKubernetes/serial/StartNoArgs 4.13
254 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
255 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 7.26
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 11.46
257 TestNetworkPlugins/group/auto/Start 44.28
258 TestNetworkPlugins/group/kindnet/Start 59.68
259 TestNetworkPlugins/group/auto/KubeletFlags 0.41
260 TestNetworkPlugins/group/auto/NetCatPod 12.19
261 TestNetworkPlugins/group/auto/DNS 0.14
262 TestNetworkPlugins/group/auto/Localhost 0.13
263 TestNetworkPlugins/group/auto/HairPin 5.12
264 TestNetworkPlugins/group/cilium/Start 74.11
265 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
266 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
267 TestNetworkPlugins/group/kindnet/NetCatPod 11.21
268 TestNetworkPlugins/group/kindnet/DNS 0.12
269 TestNetworkPlugins/group/kindnet/Localhost 0.14
270 TestNetworkPlugins/group/kindnet/HairPin 0.11
271 TestNetworkPlugins/group/calico/Start 324.57
272 TestNetworkPlugins/group/cilium/ControllerPod 5.02
273 TestNetworkPlugins/group/cilium/KubeletFlags 0.41
274 TestNetworkPlugins/group/cilium/NetCatPod 14.68
275 TestNetworkPlugins/group/cilium/DNS 0.14
276 TestNetworkPlugins/group/cilium/Localhost 0.12
277 TestNetworkPlugins/group/cilium/HairPin 0.14
278 TestNetworkPlugins/group/false/Start 80.65
279 TestNetworkPlugins/group/false/KubeletFlags 0.4
280 TestNetworkPlugins/group/false/NetCatPod 13.25
281 TestNetworkPlugins/group/false/DNS 0.12
282 TestNetworkPlugins/group/false/Localhost 0.11
283 TestNetworkPlugins/group/false/HairPin 5.11
284 TestNetworkPlugins/group/bridge/Start 46.51
285 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
286 TestNetworkPlugins/group/bridge/NetCatPod 12.22
287 TestNetworkPlugins/group/bridge/DNS 0.12
288 TestNetworkPlugins/group/bridge/Localhost 0.11
289 TestNetworkPlugins/group/bridge/HairPin 0.11
290 TestNetworkPlugins/group/enable-default-cni/Start 91.44
291 TestNetworkPlugins/group/calico/ControllerPod 5.02
292 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
293 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.17
294 TestNetworkPlugins/group/calico/KubeletFlags 0.45
295 TestNetworkPlugins/group/calico/NetCatPod 14.31
296 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
297 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
298 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
299 TestNetworkPlugins/group/calico/DNS 0.12
300 TestNetworkPlugins/group/kubenet/Start 46.39
301 TestNetworkPlugins/group/calico/Localhost 0.11
302 TestNetworkPlugins/group/calico/HairPin 0.11
305 TestNetworkPlugins/group/kubenet/KubeletFlags 0.4
306 TestNetworkPlugins/group/kubenet/NetCatPod 13.19
307 TestNetworkPlugins/group/kubenet/DNS 0.11
308 TestNetworkPlugins/group/kubenet/Localhost 0.12
311 TestStartStop/group/no-preload/serial/FirstStart 88.5
312 TestStartStop/group/no-preload/serial/DeployApp 10.26
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.77
314 TestStartStop/group/no-preload/serial/Stop 12.44
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.39
316 TestStartStop/group/no-preload/serial/SecondStart 299.92
319 TestStartStop/group/old-k8s-version/serial/Stop 1.59
320 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.39
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.02
323 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
324 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.42
325 TestStartStop/group/no-preload/serial/Pause 3.28
327 TestStartStop/group/embed-certs/serial/FirstStart 45.32
328 TestStartStop/group/embed-certs/serial/DeployApp 8.31
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.78
330 TestStartStop/group/embed-certs/serial/Stop 12.4
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.38
332 TestStartStop/group/embed-certs/serial/SecondStart 298.35
334 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 15.01
335 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
336 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.43
337 TestStartStop/group/embed-certs/serial/Pause 3.2
339 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.9
340 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.3
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.77
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.4
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.39
344 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 301.8
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.02
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.44
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.24
350 TestStartStop/group/newest-cni/serial/FirstStart 41.03
351 TestStartStop/group/newest-cni/serial/DeployApp 0
352 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.85
353 TestStartStop/group/newest-cni/serial/Stop 12.39
354 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.39
355 TestStartStop/group/newest-cni/serial/SecondStart 18.89
357 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
358 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
359 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.49
360 TestStartStop/group/newest-cni/serial/Pause 3.1
TestDownloadOnly/v1.16.0/json-events (12.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-084452 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-084452 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (12.455679155s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (12.46s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-084452
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-084452: exit status 85 (294.546218ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-084452 | jenkins | v1.28.0 | 07 Nov 22 08:44 PST |          |
	|         | -p download-only-084452        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 08:44:52
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 08:44:52.625253    3269 out.go:296] Setting OutFile to fd 1 ...
	I1107 08:44:52.625423    3269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 08:44:52.625429    3269 out.go:309] Setting ErrFile to fd 2...
	I1107 08:44:52.625433    3269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 08:44:52.625548    3269 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	W1107 08:44:52.625648    3269 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15310-2115/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15310-2115/.minikube/config/config.json: no such file or directory
	I1107 08:44:52.626398    3269 out.go:303] Setting JSON to true
	I1107 08:44:52.644904    3269 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":867,"bootTime":1667838625,"procs":381,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1107 08:44:52.644997    3269 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 08:44:52.668025    3269 out.go:97] [download-only-084452] minikube v1.28.0 on Darwin 13.0
	I1107 08:44:52.668241    3269 notify.go:220] Checking for updates...
	W1107 08:44:52.668263    3269 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball: no such file or directory
	I1107 08:44:52.688524    3269 out.go:169] MINIKUBE_LOCATION=15310
	I1107 08:44:52.730812    3269 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 08:44:52.773630    3269 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 08:44:52.794727    3269 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 08:44:52.815830    3269 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	W1107 08:44:52.859763    3269 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 08:44:52.860156    3269 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 08:44:52.919621    3269 docker.go:137] docker version: linux-20.10.20
	I1107 08:44:52.919767    3269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 08:44:53.065764    3269 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2022-11-07 16:44:52.989452993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 08:44:53.088721    3269 out.go:97] Using the docker driver based on user configuration
	I1107 08:44:53.088794    3269 start.go:282] selected driver: docker
	I1107 08:44:53.088809    3269 start.go:808] validating driver "docker" against <nil>
	I1107 08:44:53.089067    3269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 08:44:53.232731    3269 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2022-11-07 16:44:53.159290707 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 08:44:53.232848    3269 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 08:44:53.236783    3269 start_flags.go:384] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I1107 08:44:53.236920    3269 start_flags.go:883] Wait components to verify : map[apiserver:true system_pods:true]
	I1107 08:44:53.258226    3269 out.go:169] Using Docker Desktop driver with root privileges
	I1107 08:44:53.279148    3269 cni.go:95] Creating CNI manager for ""
	I1107 08:44:53.279242    3269 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 08:44:53.279261    3269 start_flags.go:317] config:
	{Name:download-only-084452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-084452 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 08:44:53.301035    3269 out.go:97] Starting control plane node download-only-084452 in cluster download-only-084452
	I1107 08:44:53.301157    3269 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 08:44:53.323154    3269 out.go:97] Pulling base image ...
	I1107 08:44:53.323273    3269 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 08:44:53.323359    3269 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 08:44:53.376154    3269 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1107 08:44:53.376174    3269 cache.go:57] Caching tarball of preloaded images
	I1107 08:44:53.376405    3269 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 08:44:53.398280    3269 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1107 08:44:53.398327    3269 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1107 08:44:53.401755    3269 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1107 08:44:53.401971    3269 image.go:60] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory
	I1107 08:44:53.402152    3269 image.go:120] Writing gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1107 08:44:53.477797    3269 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1107 08:44:58.033468    3269 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1107 08:44:58.033725    3269 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1107 08:44:58.580127    3269 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1107 08:44:58.580332    3269 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/download-only-084452/config.json ...
	I1107 08:44:58.580361    3269 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/download-only-084452/config.json: {Name:mk83d3e58567e10222a8122b05955d7db9210e35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 08:44:58.580623    3269 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 08:44:58.580900    3269 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-084452"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)

                                                
                                    
TestDownloadOnly/v1.25.3/json-events (6.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-084452 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-084452 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker : (6.644163545s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (6.64s)

                                                
                                    
TestDownloadOnly/v1.25.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/kubectl
--- PASS: TestDownloadOnly/v1.25.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-084452
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-084452: exit status 85 (293.528597ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-084452 | jenkins | v1.28.0 | 07 Nov 22 08:44 PST |          |
	|         | -p download-only-084452        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-084452 | jenkins | v1.28.0 | 07 Nov 22 08:45 PST |          |
	|         | -p download-only-084452        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 08:45:05
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 08:45:05.378220    3309 out.go:296] Setting OutFile to fd 1 ...
	I1107 08:45:05.378474    3309 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 08:45:05.378480    3309 out.go:309] Setting ErrFile to fd 2...
	I1107 08:45:05.378484    3309 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 08:45:05.378612    3309 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	W1107 08:45:05.378703    3309 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15310-2115/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15310-2115/.minikube/config/config.json: no such file or directory
	I1107 08:45:05.379068    3309 out.go:303] Setting JSON to true
	I1107 08:45:05.398165    3309 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":880,"bootTime":1667838625,"procs":382,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1107 08:45:05.398272    3309 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 08:45:05.420640    3309 out.go:97] [download-only-084452] minikube v1.28.0 on Darwin 13.0
	I1107 08:45:05.420811    3309 notify.go:220] Checking for updates...
	I1107 08:45:05.442384    3309 out.go:169] MINIKUBE_LOCATION=15310
	I1107 08:45:05.464350    3309 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 08:45:05.486680    3309 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 08:45:05.508724    3309 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 08:45:05.530448    3309 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	W1107 08:45:05.573339    3309 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 08:45:05.574064    3309 config.go:180] Loaded profile config "download-only-084452": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1107 08:45:05.574149    3309 start.go:716] api.Load failed for download-only-084452: filestore "download-only-084452": Docker machine "download-only-084452" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1107 08:45:05.574239    3309 driver.go:365] Setting default libvirt URI to qemu:///system
	W1107 08:45:05.574281    3309 start.go:716] api.Load failed for download-only-084452: filestore "download-only-084452": Docker machine "download-only-084452" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1107 08:45:05.633972    3309 docker.go:137] docker version: linux-20.10.20
	I1107 08:45:05.634107    3309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 08:45:05.780536    3309 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2022-11-07 16:45:05.69211802 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/loc
al/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 08:45:05.801667    3309 out.go:97] Using the docker driver based on existing profile
	I1107 08:45:05.801699    3309 start.go:282] selected driver: docker
	I1107 08:45:05.801712    3309 start.go:808] validating driver "docker" against &{Name:download-only-084452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-084452 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 08:45:05.802044    3309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 08:45:05.947541    3309 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2022-11-07 16:45:05.863247138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 08:45:05.949945    3309 cni.go:95] Creating CNI manager for ""
	I1107 08:45:05.949962    3309 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 08:45:05.949977    3309 start_flags.go:317] config:
	{Name:download-only-084452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-084452 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 08:45:05.971705    3309 out.go:97] Starting control plane node download-only-084452 in cluster download-only-084452
	I1107 08:45:05.971819    3309 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 08:45:05.993561    3309 out.go:97] Pulling base image ...
	I1107 08:45:05.993618    3309 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 08:45:05.993781    3309 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 08:45:06.046550    3309 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 08:45:06.046569    3309 cache.go:57] Caching tarball of preloaded images
	I1107 08:45:06.046784    3309 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 08:45:06.047047    3309 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1107 08:45:06.047138    3309 image.go:60] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory
	I1107 08:45:06.047161    3309 image.go:63] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory, skipping pull
	I1107 08:45:06.047166    3309 image.go:104] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in cache, skipping pull
	I1107 08:45:06.047174    3309 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 as a tarball
	I1107 08:45:06.068587    3309 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I1107 08:45:06.068622    3309 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I1107 08:45:06.148101    3309 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4?checksum=md5:624cb874287e7e3d793b79e4205a7f98 -> /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-084452"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.29s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.67s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.67s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-084452
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

                                                
                                    
TestDownloadOnlyKic (15.71s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-084513 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-084513 --force --alsologtostderr --driver=docker : (14.638811718s)
helpers_test.go:175: Cleaning up "download-docker-084513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-084513
--- PASS: TestDownloadOnlyKic (15.71s)

                                                
                                    
TestBinaryMirror (1.71s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-084529 --alsologtostderr --binary-mirror http://127.0.0.1:49430 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-084529 --alsologtostderr --binary-mirror http://127.0.0.1:49430 --driver=docker : (1.101117505s)
helpers_test.go:175: Cleaning up "binary-mirror-084529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-084529
--- PASS: TestBinaryMirror (1.71s)

                                                
                                    
TestOffline (46.87s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-092103 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-092103 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (44.154888378s)
helpers_test.go:175: Cleaning up "offline-docker-092103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-092103
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-092103: (2.711581306s)
--- PASS: TestOffline (46.87s)

                                                
                                    
TestAddons/Setup (151.91s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-084531 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-084531 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m31.913581443s)
--- PASS: TestAddons/Setup (151.91s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.79s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: metrics-server stabilized in 2.050016ms
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-769cd898cd-d7mgw" [8e0417a3-c140-4871-ba94-d4ce9891cc4a] Running
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010845757s
addons_test.go:368: (dbg) Run:  kubectl --context addons-084531 top pods -n kube-system
addons_test.go:385: (dbg) Run:  out/minikube-darwin-amd64 -p addons-084531 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.79s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.51s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: tiller-deploy stabilized in 2.538664ms
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-b8jmk" [4930a48b-08b5-40c1-9685-2ef5c1677842] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008574588s
addons_test.go:426: (dbg) Run:  kubectl --context addons-084531 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:426: (dbg) Done: kubectl --context addons-084531 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.046060897s)
addons_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p addons-084531 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.51s)

                                                
                                    
TestAddons/parallel/CSI (40.17s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:514: csi-hostpath-driver pods stabilized in 4.155382ms
addons_test.go:517: (dbg) Run:  kubectl --context addons-084531 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:522: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-084531 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:527: (dbg) Run:  kubectl --context addons-084531 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:532: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [d0de4073-40a0-4566-9cf1-ddf5eeb85dcd] Pending
helpers_test.go:342: "task-pv-pod" [d0de4073-40a0-4566-9cf1-ddf5eeb85dcd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [d0de4073-40a0-4566-9cf1-ddf5eeb85dcd] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:532: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.00841702s
addons_test.go:537: (dbg) Run:  kubectl --context addons-084531 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:542: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-084531 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-084531 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:547: (dbg) Run:  kubectl --context addons-084531 delete pod task-pv-pod

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:547: (dbg) Done: kubectl --context addons-084531 delete pod task-pv-pod: (1.254100842s)
addons_test.go:553: (dbg) Run:  kubectl --context addons-084531 delete pvc hpvc
addons_test.go:559: (dbg) Run:  kubectl --context addons-084531 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:564: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-084531 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:569: (dbg) Run:  kubectl --context addons-084531 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:574: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [9307d690-3a06-4343-be63-ef72863e7015] Pending
helpers_test.go:342: "task-pv-pod-restore" [9307d690-3a06-4343-be63-ef72863e7015] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [9307d690-3a06-4343-be63-ef72863e7015] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:574: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.008813922s
addons_test.go:579: (dbg) Run:  kubectl --context addons-084531 delete pod task-pv-pod-restore
addons_test.go:583: (dbg) Run:  kubectl --context addons-084531 delete pvc hpvc-restore
addons_test.go:587: (dbg) Run:  kubectl --context addons-084531 delete volumesnapshot new-snapshot-demo
addons_test.go:591: (dbg) Run:  out/minikube-darwin-amd64 -p addons-084531 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:591: (dbg) Done: out/minikube-darwin-amd64 -p addons-084531 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.766837283s)
addons_test.go:595: (dbg) Run:  out/minikube-darwin-amd64 -p addons-084531 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.17s)
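For reference, the snapshot/restore flow exercised above can be replayed by hand against a profile that has the csi-hostpath-driver and volumesnapshots addons enabled and already has the hpvc PVC bound by task-pv-pod (all names, manifests, and flags below are taken from this log; the testdata manifests ship with the minikube integration tests):

  # snapshot the in-use volume and poll until readyToUse is true
  kubectl --context addons-084531 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-084531 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
  # drop the original consumer, then restore the snapshot into a new PVC and pod
  kubectl --context addons-084531 delete pod task-pv-pod
  kubectl --context addons-084531 delete pvc hpvc
  kubectl --context addons-084531 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-084531 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
  # clean up and turn the addons back off
  kubectl --context addons-084531 delete pod task-pv-pod-restore
  kubectl --context addons-084531 delete pvc hpvc-restore
  kubectl --context addons-084531 delete volumesnapshot new-snapshot-demo
  out/minikube-darwin-amd64 -p addons-084531 addons disable csi-hostpath-driver --alsologtostderr -v=1
  out/minikube-darwin-amd64 -p addons-084531 addons disable volumesnapshots --alsologtostderr -v=1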

                                                
                                    
TestAddons/parallel/Headlamp (11.39s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:738: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-084531 --alsologtostderr -v=1
addons_test.go:738: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-084531 --alsologtostderr -v=1: (1.38220269s)
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-5f4cf474d8-t6s6z" [6413afad-a7fc-4406-8c72-5ded99b71822] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-5f4cf474d8-t6s6z" [6413afad-a7fc-4406-8c72-5ded99b71822] Running

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.008697965s
--- PASS: TestAddons/parallel/Headlamp (11.39s)
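Enabling the addon and waiting for a pod with the app.kubernetes.io/name=headlamp label is all this test does; a minimal manual equivalent (the label selector is the one the test waits on, and listing pods by label is an assumption about how to reproduce the wait outside the test harness):

  out/minikube-darwin-amd64 addons enable headlamp -p addons-084531 --alsologtostderr -v=1
  kubectl --context addons-084531 -n headlamp get pods -l app.kubernetes.io/name=headlamp   # wait for Running/Ready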

                                                
                                    
TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
helpers_test.go:342: "cloud-spanner-emulator-6c47ff8fb6-5zqhg" [a143147a-0091-4984-9a27-89727c7ea56a] Running

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008509762s
addons_test.go:762: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-084531
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                    
TestAddons/serial/GCPAuth (16.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:606: (dbg) Run:  kubectl --context addons-084531 create -f testdata/busybox.yaml
addons_test.go:613: (dbg) Run:  kubectl --context addons-084531 create sa gcp-auth-test
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [d484ccbc-1ac8-453d-bf44-05b2e6671023] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [d484ccbc-1ac8-453d-bf44-05b2e6671023] Running
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 9.007370521s
addons_test.go:625: (dbg) Run:  kubectl --context addons-084531 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:637: (dbg) Run:  kubectl --context addons-084531 describe sa gcp-auth-test
addons_test.go:651: (dbg) Run:  kubectl --context addons-084531 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:675: (dbg) Run:  kubectl --context addons-084531 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:688: (dbg) Run:  out/minikube-darwin-amd64 -p addons-084531 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:688: (dbg) Done: out/minikube-darwin-amd64 -p addons-084531 addons disable gcp-auth --alsologtostderr -v=1: (6.586897816s)
--- PASS: TestAddons/serial/GCPAuth (16.15s)
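A rough manual equivalent of the checks above, assuming a profile with the gcp-auth addon enabled; the busybox manifest and the paths and variables probed are the ones from this log:

  kubectl --context addons-084531 create -f testdata/busybox.yaml
  kubectl --context addons-084531 create sa gcp-auth-test
  # the gcp-auth webhook should have injected credentials and project metadata into the pod
  kubectl --context addons-084531 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
  kubectl --context addons-084531 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
  kubectl --context addons-084531 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
  out/minikube-darwin-amd64 -p addons-084531 addons disable gcp-auth --alsologtostderr -v=1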

                                                
                                    
TestAddons/StoppedEnableDisable (12.96s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:135: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-084531
addons_test.go:135: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-084531: (12.505939555s)
addons_test.go:139: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-084531
addons_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-084531
--- PASS: TestAddons/StoppedEnableDisable (12.96s)

                                                
                                    
TestCertOptions (32.33s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-093136 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-093136 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (28.750415372s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-093136 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-093136 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-093136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-093136
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-093136: (2.666790594s)
--- PASS: TestCertOptions (32.33s)
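The certificate assertions can be spot-checked with the same commands the test runs; a sketch using the flags from the log (the extra IPs/names should appear as SANs in the apiserver certificate, and the in-node admin.conf should point at port 8555):

  out/minikube-darwin-amd64 start -p cert-options-093136 --memory=2048 \
    --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
    --apiserver-names=localhost --apiserver-names=www.google.com \
    --apiserver-port=8555 --driver=docker --apiserver-name=localhost
  # inspect the generated apiserver certificate inside the node
  out/minikube-darwin-amd64 -p cert-options-093136 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
  # confirm the in-node kubeconfig uses the custom apiserver port
  out/minikube-darwin-amd64 ssh -p cert-options-093136 -- "sudo cat /etc/kubernetes/admin.conf"
  out/minikube-darwin-amd64 delete -p cert-options-093136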

                                                
                                    
TestCertExpiration (240.77s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-092821 --memory=2048 --cert-expiration=3m --driver=docker 
E1107 09:28:22.121813    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:28:30.927622    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-092821 --memory=2048 --cert-expiration=3m --driver=docker : (27.887623023s)

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-092821 --memory=2048 --cert-expiration=8760h --driver=docker 

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-092821 --memory=2048 --cert-expiration=8760h --driver=docker : (30.225341981s)
helpers_test.go:175: Cleaning up "cert-expiration-092821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-092821
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-092821: (2.650653515s)
--- PASS: TestCertExpiration (240.77s)

                                                
                                    
TestDockerFlags (31.62s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-093104 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E1107 09:31:05.967545    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-093104 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (28.158105049s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-093104 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-093104 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-093104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-093104
E1107 09:31:33.990713    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-093104: (2.643353155s)
--- PASS: TestDockerFlags (31.62s)
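To verify that --docker-env and --docker-opt actually reach the Docker daemon in the node, the test reads the systemd unit back; the same two queries can be run by hand (FOO=BAR and BAZ=BAT are expected under Environment, and the debug/icc options in ExecStart):

  out/minikube-darwin-amd64 -p docker-flags-093104 ssh "sudo systemctl show docker --property=Environment --no-pager"
  out/minikube-darwin-amd64 -p docker-flags-093104 ssh "sudo systemctl show docker --property=ExecStart --no-pager"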

                                                
                                    
TestForceSystemdFlag (30.7s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-092652 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
E1107 09:27:00.182542    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-092652 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (27.61002953s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-092652 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-092652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-092652
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-092652: (2.624927087s)
--- PASS: TestForceSystemdFlag (30.70s)
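The --force-systemd check boils down to asking Docker inside the node for its cgroup driver, which should report systemd rather than the default cgroupfs:

  out/minikube-darwin-amd64 -p force-systemd-flag-092652 ssh "docker info --format {{.CgroupDriver}}"   # expected: systemd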

                                                
                                    
TestForceSystemdEnv (31.78s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-092749 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E1107 09:28:03.196868    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-092749 --memory=2048 --alsologtostderr -v=5 --driver=docker : (28.69478727s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-092749 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-092749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-092749
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-092749: (2.611307538s)
--- PASS: TestForceSystemdEnv (31.78s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (7.87s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.87s)

                                                
                                    
TestErrorSpam/setup (27.44s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-084931 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-084931 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 --driver=docker : (27.443596101s)
--- PASS: TestErrorSpam/setup (27.44s)

                                                
                                    
TestErrorSpam/start (2.45s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-084931 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-084931 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-084931 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 start --dry-run
--- PASS: TestErrorSpam/start (2.45s)

                                                
                                    
TestErrorSpam/status (1.22s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-084931 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-084931 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-084931 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 status
--- PASS: TestErrorSpam/status (1.22s)

                                                
                                    
TestErrorSpam/pause (1.86s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-084931 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-084931 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-084931 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 pause
--- PASS: TestErrorSpam/pause (1.86s)

                                                
                                    
TestErrorSpam/unpause (1.79s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-084931 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-084931 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-084931 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
TestErrorSpam/stop (12.97s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-084931 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-084931 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 stop: (12.32287572s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-084931 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-084931 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-084931 stop
--- PASS: TestErrorSpam/stop (12.97s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/test/nested/copy/3267/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (51.15s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-085021 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2161: (dbg) Done: out/minikube-darwin-amd64 start -p functional-085021 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (51.152634928s)
--- PASS: TestFunctional/serial/StartWithProxy (51.15s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (68.59s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-085021 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-darwin-amd64 start -p functional-085021 --alsologtostderr -v=8: (1m8.592518441s)
functional_test.go:656: soft start took 1m8.592941705s for "functional-085021" cluster.
--- PASS: TestFunctional/serial/SoftStart (68.59s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-085021 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (5.93s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 cache add k8s.gcr.io/pause:3.1: (2.037876333s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 cache add k8s.gcr.io/pause:3.3: (2.130433055s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 cache add k8s.gcr.io/pause:latest: (1.759150905s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.93s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.82s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-085021 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2436144078/001
functional_test.go:1082: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 cache add minikube-local-cache-test:functional-085021
functional_test.go:1082: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 cache add minikube-local-cache-test:functional-085021: (1.275221586s)
functional_test.go:1087: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 cache delete minikube-local-cache-test:functional-085021
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-085021
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.82s)
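The add_local case exercises caching an image that exists only in the host's Docker daemon; roughly, with any locally built throwaway image (the test builds one from a temporary directory, so the build directory below is a placeholder):

  docker build -t minikube-local-cache-test:functional-085021 <dir-with-a-minimal-Dockerfile>
  out/minikube-darwin-amd64 -p functional-085021 cache add minikube-local-cache-test:functional-085021
  out/minikube-darwin-amd64 -p functional-085021 cache delete minikube-local-cache-test:functional-085021
  docker rmi minikube-local-cache-test:functional-085021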

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.38s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-085021 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (385.59794ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 cache reload: (1.165930151s)
functional_test.go:1156: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.38s)
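The cache_reload round trip above is: delete the image inside the node, confirm crictl no longer finds it, repopulate from minikube's on-host cache, and confirm it is back; the commands are the ones from the log:

  out/minikube-darwin-amd64 -p functional-085021 ssh sudo docker rmi k8s.gcr.io/pause:latest
  out/minikube-darwin-amd64 -p functional-085021 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # exits non-zero: image gone
  out/minikube-darwin-amd64 -p functional-085021 cache reload
  out/minikube-darwin-amd64 -p functional-085021 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again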

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 kubectl -- --context functional-085021 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.49s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-085021 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

                                                
                                    
TestFunctional/serial/ExtraConfig (44.58s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-085021 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1107 08:53:03.056035    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:53:03.102983    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:53:03.113136    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:53:03.133486    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:53:03.173616    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:53:03.254013    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:53:03.414091    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:53:03.734187    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:53:04.376183    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:53:05.656681    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:53:08.218063    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:53:13.338525    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
functional_test.go:750: (dbg) Done: out/minikube-darwin-amd64 start -p functional-085021 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.576872689s)
functional_test.go:754: restart took 44.577038939s for "functional-085021" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.58s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-085021 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (2.91s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 logs
functional_test.go:1229: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 logs: (2.909914952s)
--- PASS: TestFunctional/serial/LogsCmd (2.91s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.04s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd859977764/001/logs.txt
E1107 08:53:23.581076    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
functional_test.go:1243: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd859977764/001/logs.txt: (3.039490533s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.04s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-085021 config get cpus: exit status 14 (56.192951ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-085021 config get cpus: exit status 14 (56.06205ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-085021 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-085021 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 5730: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.55s)

                                                
                                    
TestFunctional/parallel/DryRun (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-085021 --dry-run --memory 250MB --alsologtostderr --driver=docker 

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-085021 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (613.342072ms)

                                                
                                                
-- stdout --
	* [functional-085021] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 08:54:34.824341    5649 out.go:296] Setting OutFile to fd 1 ...
	I1107 08:54:34.824551    5649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 08:54:34.824558    5649 out.go:309] Setting ErrFile to fd 2...
	I1107 08:54:34.824562    5649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 08:54:34.824677    5649 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	I1107 08:54:34.825197    5649 out.go:303] Setting JSON to false
	I1107 08:54:34.846214    5649 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1449,"bootTime":1667838625,"procs":383,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1107 08:54:34.846308    5649 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 08:54:34.868810    5649 out.go:177] * [functional-085021] minikube v1.28.0 on Darwin 13.0
	I1107 08:54:34.890756    5649 notify.go:220] Checking for updates...
	I1107 08:54:34.912341    5649 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 08:54:34.934453    5649 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 08:54:34.955536    5649 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 08:54:34.976523    5649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 08:54:34.998584    5649 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	I1107 08:54:35.021215    5649 config.go:180] Loaded profile config "functional-085021": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 08:54:35.021847    5649 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 08:54:35.083767    5649 docker.go:137] docker version: linux-20.10.20
	I1107 08:54:35.083912    5649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 08:54:35.227855    5649 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-07 16:54:35.141524319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 08:54:35.249739    5649 out.go:177] * Using the docker driver based on existing profile
	I1107 08:54:35.270407    5649 start.go:282] selected driver: docker
	I1107 08:54:35.270429    5649 start.go:808] validating driver "docker" against &{Name:functional-085021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-085021 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regist
ry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 08:54:35.270546    5649 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 08:54:35.294271    5649 out.go:177] 
	W1107 08:54:35.315839    5649 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1107 08:54:35.359256    5649 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-085021 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.58s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-085021 --dry-run --memory 250MB --alsologtostderr --driver=docker 

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-085021 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (871.6576ms)

                                                
                                                
-- stdout --
	* [functional-085021] minikube v1.28.0 sur Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 08:54:35.315902    5664 out.go:296] Setting OutFile to fd 1 ...
	I1107 08:54:35.338384    5664 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 08:54:35.338441    5664 out.go:309] Setting ErrFile to fd 2...
	I1107 08:54:35.338451    5664 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 08:54:35.338700    5664 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	I1107 08:54:35.359731    5664 out.go:303] Setting JSON to false
	I1107 08:54:35.380110    5664 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1450,"bootTime":1667838625,"procs":384,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1107 08:54:35.380230    5664 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1107 08:54:35.401459    5664 out.go:177] * [functional-085021] minikube v1.28.0 sur Darwin 13.0
	I1107 08:54:35.443715    5664 notify.go:220] Checking for updates...
	I1107 08:54:35.465706    5664 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 08:54:35.508385    5664 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	I1107 08:54:35.572738    5664 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 08:54:35.636372    5664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 08:54:35.678358    5664 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	I1107 08:54:35.700528    5664 config.go:180] Loaded profile config "functional-085021": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 08:54:35.701216    5664 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 08:54:35.810478    5664 docker.go:137] docker version: linux-20.10.20
	I1107 08:54:35.810636    5664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 08:54:35.955938    5664 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:56 SystemTime:2022-11-07 16:54:35.869264321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 08:54:35.976983    5664 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1107 08:54:35.997851    5664 start.go:282] selected driver: docker
	I1107 08:54:35.997882    5664 start.go:808] validating driver "docker" against &{Name:functional-085021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-085021 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 08:54:35.997990    5664 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 08:54:36.021781    5664 out.go:177] 
	W1107 08:54:36.043140    5664 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1107 08:54:36.064892    5664 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.87s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 status
functional_test.go:853: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.31s)
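
For reference, the status checks exercised above can be reproduced by hand against the same profile. This is an illustrative sketch, not part of the recorded output; any Go template over the Host/Kubelet/APIServer/Kubeconfig fields works with -f:

  # plain, custom-format, and JSON status for the functional-085021 profile
  out/minikube-darwin-amd64 -p functional-085021 status
  out/minikube-darwin-amd64 -p functional-085021 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  out/minikube-darwin-amd64 -p functional-085021 status -o json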

                                                
                                    
TestFunctional/parallel/ServiceCmd (18.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-085021 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-085021 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-qk55b" [df2a9036-030b-47d4-b9c4-1062440dd132] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-qk55b" [df2a9036-030b-47d4-b9c4-1062440dd132] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 12.008016053s
functional_test.go:1449: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 service --namespace=default --https --url hello-node
functional_test.go:1463: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 service --namespace=default --https --url hello-node: (2.026578627s)
functional_test.go:1476: found endpoint: https://127.0.0.1:50283
functional_test.go:1491: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 service hello-node --url --format={{.IP}}
functional_test.go:1491: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 service hello-node --url --format={{.IP}}: (2.026522151s)
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 service hello-node --url

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1505: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 service hello-node --url: (2.027496981s)
functional_test.go:1511: found endpoint for hello-node: http://127.0.0.1:50315
--- PASS: TestFunctional/parallel/ServiceCmd (18.89s)
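
The service flow above can be replayed manually as a minimal sketch; the deployment name and image mirror the log, the printed URLs will differ per run, and the wait step is a stand-in for the test's own readiness polling:

  kubectl --context functional-085021 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
  kubectl --context functional-085021 expose deployment hello-node --type=NodePort --port=8080
  kubectl --context functional-085021 wait --for=condition=ready pod -l app=hello-node --timeout=600s
  out/minikube-darwin-amd64 -p functional-085021 service list
  out/minikube-darwin-amd64 -p functional-085021 service hello-node --url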

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (27.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [63d2fa15-9469-4048-94fb-b209901cc014] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010184811s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-085021 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-085021 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-085021 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-085021 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [3b3a9a07-76e5-4354-b484-c31be99e95bb] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [3b3a9a07-76e5-4354-b484-c31be99e95bb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [3b3a9a07-76e5-4354-b484-c31be99e95bb] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.007121562s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-085021 exec sp-pod -- touch /tmp/mount/foo

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-085021 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-085021 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [4d4ac820-7e54-4630-a0df-6b2a44cfcdf5] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [4d4ac820-7e54-4630-a0df-6b2a44cfcdf5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [4d4ac820-7e54-4630-a0df-6b2a44cfcdf5] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00685811s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-085021 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.57s)
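
The persistence check above boils down to writing through the claim, recreating the pod, and reading the file back. A minimal sketch using the same testdata manifests referenced in the log (the wait commands are stand-ins for the test's readiness polling):

  kubectl --context functional-085021 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-085021 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-085021 wait --for=condition=ready pod sp-pod --timeout=180s
  kubectl --context functional-085021 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-085021 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-085021 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-085021 wait --for=condition=ready pod sp-pod --timeout=180s
  kubectl --context functional-085021 exec sp-pod -- ls /tmp/mount   # foo should survive the pod recreation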

                                                
                                    
TestFunctional/parallel/SSHCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh -n functional-085021 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 cp functional-085021:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd3697968557/001/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh -n functional-085021 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.14s)
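
Copying a file in and out of the node, as the test does above, is just minikube cp plus an ssh read-back; this sketch mirrors the logged commands, with the round-trip destination path being a placeholder:

  out/minikube-darwin-amd64 -p functional-085021 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-darwin-amd64 -p functional-085021 ssh -n functional-085021 "sudo cat /home/docker/cp-test.txt"
  out/minikube-darwin-amd64 -p functional-085021 cp functional-085021:/home/docker/cp-test.txt ./cp-test-roundtrip.txt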

                                                
                                    
TestFunctional/parallel/MySQL (29.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-085021 replace --force -f testdata/mysql.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-fmfc9" [e01332d8-c1bd-42f5-9e1b-d7bdecd146f3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-fmfc9" [e01332d8-c1bd-42f5-9e1b-d7bdecd146f3] Running
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.013955476s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-085021 exec mysql-596b7fcdbf-fmfc9 -- mysql -ppassword -e "show databases;"

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-085021 exec mysql-596b7fcdbf-fmfc9 -- mysql -ppassword -e "show databases;": exit status 1 (127.88677ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-085021 exec mysql-596b7fcdbf-fmfc9 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-085021 exec mysql-596b7fcdbf-fmfc9 -- mysql -ppassword -e "show databases;": exit status 1 (114.304339ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-085021 exec mysql-596b7fcdbf-fmfc9 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-085021 exec mysql-596b7fcdbf-fmfc9 -- mysql -ppassword -e "show databases;": exit status 1 (110.074397ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-085021 exec mysql-596b7fcdbf-fmfc9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.74s)
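
The "Access denied" and "Can't connect" exits above are expected while mysqld is still initializing inside the pod; the test simply retries until the query succeeds. A hedged sketch of the same polling loop (the pod name is taken from this run and would need to match the current pod):

  until kubectl --context functional-085021 exec mysql-596b7fcdbf-fmfc9 -- mysql -ppassword -e "show databases;"; do
    echo "mysqld not ready yet, retrying..."; sleep 2
  done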

                                                
                                    
TestFunctional/parallel/FileSync (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/3267/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh "sudo cat /etc/test/nested/copy/3267/hosts"
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.44s)
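
The /etc/test/nested/copy/3267/hosts file checked above is synced from the host: as I understand minikube's file sync, anything placed under ~/.minikube/files/<path> before the cluster is (re)started shows up at /<path> inside the node. An illustrative sketch with made-up paths:

  mkdir -p ~/.minikube/files/etc/demo
  echo "hello from the host" > ~/.minikube/files/etc/demo/greeting.txt
  out/minikube-darwin-amd64 start -p functional-085021        # sync happens during start/restart
  out/minikube-darwin-amd64 -p functional-085021 ssh "cat /etc/demo/greeting.txt"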

                                                
                                    
TestFunctional/parallel/CertSync (2.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/3267.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh "sudo cat /etc/ssl/certs/3267.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/3267.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh "sudo cat /usr/share/ca-certificates/3267.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/32672.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh "sudo cat /etc/ssl/certs/32672.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/32672.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh "sudo cat /usr/share/ca-certificates/32672.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.58s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-085021 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-085021 ssh "sudo systemctl is-active crio": exit status 1 (573.369648ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                    
TestFunctional/parallel/License (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.48s)

                                                
                                    
TestFunctional/parallel/Version/short (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

                                                
                                    
TestFunctional/parallel/Version/components (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-085021 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-085021
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-085021
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-085021 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.25.3           | 0346dbd74bcb9 | 128MB  |
| registry.k8s.io/pause                       | 3.8               | 4873874c08efc | 711kB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| k8s.gcr.io/pause                            | 3.6               | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-proxy                  | v1.25.3           | beaaf00edd38a | 61.7MB |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| gcr.io/google-containers/addon-resizer      | functional-085021 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-085021 | d05b07c630b55 | 1.24MB |
| docker.io/library/nginx                     | alpine            | b997307a58ab5 | 23.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-085021 | 94f5f640a83c0 | 30B    |
| docker.io/library/mysql                     | 5.7               | eef0fab001e8d | 495MB  |
| docker.io/library/nginx                     | latest            | 76c69feac34e8 | 142MB  |
| registry.k8s.io/kube-controller-manager     | v1.25.3           | 6039992312758 | 117MB  |
| registry.k8s.io/kube-scheduler              | v1.25.3           | 6d23ec0e8b87e | 50.6MB |
| registry.k8s.io/etcd                        | 3.5.4-0           | a8a176a5d5d69 | 300MB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
2022/11/07 08:54:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-085021 image ls --format json:
[{"id":"6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"50600000"},{"id":"a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"300000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-085021"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"d05b07c630b557feac30529544ddf647c6bf9a0c78c72e10bdc67672c87fb4e6","repoDig
ests":[],"repoTags":["docker.io/localhost/my-image:functional-085021"],"size":"1240000"},{"id":"76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"94f5f640a83c087265036ea1a26db2bb207ad65a3f12e55514f0ba028a3c839e","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-085021"],"size":"30"},{"id":"eef0fab001e8dea739d538688b09e162bf54dd6c2bc04066bff99b5335cd6223","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"495000000"},{"id":"b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],
"size":"23600000"},{"id":"0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"128000000"},{"id":"60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"117000000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"61700000"},{"id":"4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.8"],"size":"711000"},{"id":"6e38f40d628db3002f5617342c8872c93
5de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)
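
The JSON form above is the easiest one to post-process. For example, assuming jq is available on the host, the tags alone can be pulled out like this (a sketch, not part of the test):

  out/minikube-darwin-amd64 -p functional-085021 image ls --format json | jq -r '.[].repoTags[]'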

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-085021 image ls --format yaml:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: eef0fab001e8dea739d538688b09e162bf54dd6c2bc04066bff99b5335cd6223
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "495000000"
- id: a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "300000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-085021
size: "32900000"
- id: 94f5f640a83c087265036ea1a26db2bb207ad65a3f12e55514f0ba028a3c839e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-085021
size: "30"
- id: 60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "117000000"
- id: beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "61700000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23600000"
- id: 0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "128000000"
- id: 6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "50600000"
- id: 4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.8
size: "711000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-085021 ssh pgrep buildkitd: exit status 1 (375.33057ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image build -t localhost/my-image:functional-085021 testdata/build
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 image build -t localhost/my-image:functional-085021 testdata/build: (3.12081958s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-085021 image build -t localhost/my-image:functional-085021 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 9335b0f18df0
Removing intermediate container 9335b0f18df0
---> 437f52d8806c
Step 3/3 : ADD content.txt /
---> d05b07c630b5
Successfully built d05b07c630b5
Successfully tagged localhost/my-image:functional-085021
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.86s)
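
The build above runs inside the cluster's Docker daemon, so the resulting image is immediately usable by pods without a push to a registry. Sketch of the same flow (the tag and build directory mirror the log; any directory containing a Dockerfile works):

  out/minikube-darwin-amd64 -p functional-085021 image build -t localhost/my-image:functional-085021 testdata/build
  out/minikube-darwin-amd64 -p functional-085021 image ls | grep my-image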

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.452153561s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-085021
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.51s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-085021 docker-env) && out/minikube-darwin-amd64 status -p functional-085021"
functional_test.go:492: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-085021 docker-env) && out/minikube-darwin-amd64 status -p functional-085021": (1.195803069s)
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-085021 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.83s)
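
docker-env, as exercised above, just exports DOCKER_HOST and related variables so the local docker CLI talks to the daemon inside the minikube container. A minimal sketch of that round trip (the --unset step to restore the host daemon is my understanding of the command's counterpart, hedged accordingly):

  eval $(out/minikube-darwin-amd64 -p functional-085021 docker-env)
  docker images                                                        # now lists the cluster's images
  eval $(out/minikube-darwin-amd64 -p functional-085021 docker-env --unset)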

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.41s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.46s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image load --daemon gcr.io/google-containers/addon-resizer:functional-085021

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 image load --daemon gcr.io/google-containers/addon-resizer:functional-085021: (3.079094276s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image load --daemon gcr.io/google-containers/addon-resizer:functional-085021

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 image load --daemon gcr.io/google-containers/addon-resizer:functional-085021: (2.060277651s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.933954284s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-085021
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image load --daemon gcr.io/google-containers/addon-resizer:functional-085021
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 image load --daemon gcr.io/google-containers/addon-resizer:functional-085021: (4.279043625s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image save gcr.io/google-containers/addon-resizer:functional-085021 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 image save gcr.io/google-containers/addon-resizer:functional-085021 /Users/jenkins/workspace/addon-resizer-save.tar: (1.698507985s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image rm gcr.io/google-containers/addon-resizer:functional-085021
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image load /Users/jenkins/workspace/addon-resizer-save.tar
E1107 08:53:44.061597    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
functional_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.826219297s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-085021
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 image save --daemon gcr.io/google-containers/addon-resizer:functional-085021
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-085021 image save --daemon gcr.io/google-containers/addon-resizer:functional-085021: (3.006617458s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-085021
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.13s)
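
Taken together, the image save/rm/load tests above form a simple round trip between the cluster's image store, a tarball, and the host's Docker daemon. Sketch mirroring the logged commands (the tar path is a placeholder):

  out/minikube-darwin-amd64 -p functional-085021 image save gcr.io/google-containers/addon-resizer:functional-085021 ./addon-resizer-save.tar
  out/minikube-darwin-amd64 -p functional-085021 image rm gcr.io/google-containers/addon-resizer:functional-085021
  out/minikube-darwin-amd64 -p functional-085021 image load ./addon-resizer-save.tar
  out/minikube-darwin-amd64 -p functional-085021 image save --daemon gcr.io/google-containers/addon-resizer:functional-085021   # copy back into the host's Docker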

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-085021 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-085021 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [a41563c1-8514-4248-8666-98024c956fdc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [a41563c1-8514-4248-8666-98024c956fdc] Running

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.010456685s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.17s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-085021 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
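
The tunnel subtests above amount to: run minikube tunnel in the background, create a LoadBalancer service, and hit the ingress address it receives (127.0.0.1 with the docker driver on macOS, per the log). A sketch using the same testdata manifest; tunnel may prompt for sudo to bind privileged ports:

  out/minikube-darwin-amd64 -p functional-085021 tunnel &
  kubectl --context functional-085021 apply -f testdata/testsvc.yaml
  kubectl --context functional-085021 wait --for=condition=ready pod -l run=nginx-svc --timeout=240s
  kubectl --context functional-085021 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl -s http://127.0.0.1/ | head -n 5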

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-085021 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 5334: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "413.814911ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "80.724235ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "416.779763ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "82.254097ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-085021 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3594954190/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1667840062730303000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3594954190/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1667840062730303000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3594954190/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1667840062730303000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3594954190/001/test-1667840062730303000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-085021 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (390.228604ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  7 16:54 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  7 16:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  7 16:54 test-1667840062730303000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh cat /mount-9p/test-1667840062730303000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-085021 replace --force -f testdata/busybox-mount-test.yaml
E1107 08:54:25.022528    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [def3e7f4-3a3b-4505-863d-04c4e9efe67c] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [def3e7f4-3a3b-4505-863d-04c4e9efe67c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [def3e7f4-3a3b-4505-863d-04c4e9efe67c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [def3e7f4-3a3b-4505-863d-04c4e9efe67c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.009661259s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-085021 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-085021 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3594954190/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.70s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-085021 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port356688254/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-085021 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (428.374689ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-085021 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port356688254/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-085021 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-085021 ssh "sudo umount -f /mount-9p": exit status 1 (373.044143ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-085021 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-085021 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port356688254/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.34s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.15s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-085021
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

                                                
                                    
TestFunctional/delete_my-image_image (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-085021
--- PASS: TestFunctional/delete_my-image_image (0.06s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-085021
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

                                                
                                    
TestJSONOutput/start/Command (46.01s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-090209 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-090209 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (46.01444484s)
--- PASS: TestJSONOutput/start/Command (46.01s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-090209 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-090209 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (12.25s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-090209 --output=json --user=testUser
E1107 09:03:03.119529    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-090209 --output=json --user=testUser: (12.25429954s)
--- PASS: TestJSONOutput/stop/Command (12.25s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.73s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-090311 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-090311 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (336.580457ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0016a00a-0ca9-44fc-a0ff-1fd2e8bdd6f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-090311] minikube v1.28.0 on Darwin 13.0","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6b5ce8f2-4756-4e64-ad67-2b8a0f7a20c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15310"}}
	{"specversion":"1.0","id":"56c8ef36-12f8-4e66-8956-4917740b1b57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig"}}
	{"specversion":"1.0","id":"e51c302c-c0ff-4bc9-9f78-1682bb247de6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"63e773e1-cd27-487a-87d4-748467d61e80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2872310f-61cf-430f-8b50-295f5d3cd9e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube"}}
	{"specversion":"1.0","id":"3a5b0451-74b8-43a9-8ac0-b9b3e94aecf8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-090311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-090311
--- PASS: TestErrorJSONOutput (0.73s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (29.25s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-090312 --network=
E1107 09:03:30.849153    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-090312 --network=: (26.600133846s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-090312" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-090312
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-090312: (2.590643631s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.25s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (29.57s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-090341 --network=bridge
E1107 09:03:58.546241    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-090341 --network=bridge: (27.090790146s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-090341" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-090341
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-090341: (2.425857892s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (29.57s)

                                                
                                    
TestKicExistingNetwork (30.2s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-090411 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-090411 --network=existing-network: (27.454070794s)
helpers_test.go:175: Cleaning up "existing-network-090411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-090411
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-090411: (2.3875233s)
--- PASS: TestKicExistingNetwork (30.20s)

                                                
                                    
TestKicCustomSubnet (29.28s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-090441 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-090441 --subnet=192.168.60.0/24: (26.627422145s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-090441 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-090441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-090441
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-090441: (2.601541286s)
--- PASS: TestKicCustomSubnet (29.28s)

                                                
                                    
TestMainNoArgs (0.08s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (62.88s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-090510 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-090510 --driver=docker : (27.724829641s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-090510 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-090510 --driver=docker : (28.183066126s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-090510
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-090510
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-090510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-090510
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-090510: (2.587915934s)
helpers_test.go:175: Cleaning up "first-090510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-090510
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-090510: (2.608150602s)
--- PASS: TestMinikubeProfile (62.88s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.37s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-090613 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-090613 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.371747136s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.37s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-090613 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.14s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-090613 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-090613 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.135171053s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.14s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-090613 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.13s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-090613 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-090613 --alsologtostderr -v=5: (2.129748928s)
--- PASS: TestMountStart/serial/DeleteFirst (2.13s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-090613 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.54s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-090613
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-090613: (1.537503184s)
--- PASS: TestMountStart/serial/Stop (1.54s)

                                                
                                    
TestMountStart/serial/RestartStopped (5.31s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-090613
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-090613: (4.309485299s)
--- PASS: TestMountStart/serial/RestartStopped (5.31s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-090613 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (85.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-090641 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E1107 09:08:03.127564    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-090641 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m24.677536777s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (85.36s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-090641 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-090641 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-090641 -- rollout status deployment/busybox: (3.948304157s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-090641 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-090641 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-090641 -- exec busybox-65db55d5d6-4wm8f -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-090641 -- exec busybox-65db55d5d6-fc9kt -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-090641 -- exec busybox-65db55d5d6-4wm8f -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-090641 -- exec busybox-65db55d5d6-fc9kt -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-090641 -- exec busybox-65db55d5d6-4wm8f -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-090641 -- exec busybox-65db55d5d6-fc9kt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.66s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-090641 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-090641 -- exec busybox-65db55d5d6-4wm8f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-090641 -- exec busybox-65db55d5d6-4wm8f -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-090641 -- exec busybox-65db55d5d6-fc9kt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-090641 -- exec busybox-65db55d5d6-fc9kt -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                    
TestMultiNode/serial/AddNode (26.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-090641 -v 3 --alsologtostderr
E1107 09:08:30.856750    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-090641 -v 3 --alsologtostderr: (25.340395s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.31s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.43s)

                                                
                                    
TestMultiNode/serial/CopyFile (14.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-090641 status --output json --alsologtostderr: (1.013304309s)
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 cp testdata/cp-test.txt multinode-090641:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 cp multinode-090641:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile2112461042/001/cp-test_multinode-090641.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 cp multinode-090641:/home/docker/cp-test.txt multinode-090641-m02:/home/docker/cp-test_multinode-090641_multinode-090641-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641-m02 "sudo cat /home/docker/cp-test_multinode-090641_multinode-090641-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 cp multinode-090641:/home/docker/cp-test.txt multinode-090641-m03:/home/docker/cp-test_multinode-090641_multinode-090641-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641-m03 "sudo cat /home/docker/cp-test_multinode-090641_multinode-090641-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 cp testdata/cp-test.txt multinode-090641-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 cp multinode-090641-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile2112461042/001/cp-test_multinode-090641-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 cp multinode-090641-m02:/home/docker/cp-test.txt multinode-090641:/home/docker/cp-test_multinode-090641-m02_multinode-090641.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641 "sudo cat /home/docker/cp-test_multinode-090641-m02_multinode-090641.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 cp multinode-090641-m02:/home/docker/cp-test.txt multinode-090641-m03:/home/docker/cp-test_multinode-090641-m02_multinode-090641-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641-m03 "sudo cat /home/docker/cp-test_multinode-090641-m02_multinode-090641-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 cp testdata/cp-test.txt multinode-090641-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 cp multinode-090641-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile2112461042/001/cp-test_multinode-090641-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 cp multinode-090641-m03:/home/docker/cp-test.txt multinode-090641:/home/docker/cp-test_multinode-090641-m03_multinode-090641.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641 "sudo cat /home/docker/cp-test_multinode-090641-m03_multinode-090641.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 cp multinode-090641-m03:/home/docker/cp-test.txt multinode-090641-m02:/home/docker/cp-test_multinode-090641-m03_multinode-090641-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 ssh -n multinode-090641-m02 "sudo cat /home/docker/cp-test_multinode-090641-m03_multinode-090641-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.48s)

                                                
                                    
TestMultiNode/serial/StopNode (13.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-090641 node stop m03: (12.282334891s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-090641 status: exit status 7 (741.637425ms)

                                                
                                                
-- stdout --
	multinode-090641
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-090641-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-090641-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-090641 status --alsologtostderr: exit status 7 (737.074754ms)

                                                
                                                
-- stdout --
	multinode-090641
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-090641-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-090641-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 09:09:07.445213    9053 out.go:296] Setting OutFile to fd 1 ...
	I1107 09:09:07.445390    9053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:09:07.445398    9053 out.go:309] Setting ErrFile to fd 2...
	I1107 09:09:07.445402    9053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:09:07.445540    9053 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	I1107 09:09:07.445767    9053 out.go:303] Setting JSON to false
	I1107 09:09:07.445794    9053 mustload.go:65] Loading cluster: multinode-090641
	I1107 09:09:07.445853    9053 notify.go:220] Checking for updates...
	I1107 09:09:07.446127    9053 config.go:180] Loaded profile config "multinode-090641": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:09:07.446140    9053 status.go:255] checking status of multinode-090641 ...
	I1107 09:09:07.446568    9053 cli_runner.go:164] Run: docker container inspect multinode-090641 --format={{.State.Status}}
	I1107 09:09:07.505028    9053 status.go:330] multinode-090641 host status = "Running" (err=<nil>)
	I1107 09:09:07.505056    9053 host.go:66] Checking if "multinode-090641" exists ...
	I1107 09:09:07.505324    9053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-090641
	I1107 09:09:07.562974    9053 host.go:66] Checking if "multinode-090641" exists ...
	I1107 09:09:07.563259    9053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 09:09:07.563336    9053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:09:07.620889    9053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51024 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641/id_rsa Username:docker}
	I1107 09:09:07.704909    9053 ssh_runner.go:195] Run: systemctl --version
	I1107 09:09:07.709321    9053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 09:09:07.718659    9053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-090641
	I1107 09:09:07.776100    9053 kubeconfig.go:92] found "multinode-090641" server: "https://127.0.0.1:51023"
	I1107 09:09:07.776127    9053 api_server.go:165] Checking apiserver status ...
	I1107 09:09:07.776187    9053 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 09:09:07.787296    9053 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1654/cgroup
	W1107 09:09:07.795017    9053 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1654/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1107 09:09:07.795081    9053 ssh_runner.go:195] Run: ls
	I1107 09:09:07.798746    9053 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51023/healthz ...
	I1107 09:09:07.804120    9053 api_server.go:278] https://127.0.0.1:51023/healthz returned 200:
	ok
	I1107 09:09:07.804134    9053 status.go:421] multinode-090641 apiserver status = Running (err=<nil>)
	I1107 09:09:07.804144    9053 status.go:257] multinode-090641 status: &{Name:multinode-090641 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 09:09:07.804158    9053 status.go:255] checking status of multinode-090641-m02 ...
	I1107 09:09:07.804470    9053 cli_runner.go:164] Run: docker container inspect multinode-090641-m02 --format={{.State.Status}}
	I1107 09:09:07.861296    9053 status.go:330] multinode-090641-m02 host status = "Running" (err=<nil>)
	I1107 09:09:07.861317    9053 host.go:66] Checking if "multinode-090641-m02" exists ...
	I1107 09:09:07.861584    9053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-090641-m02
	I1107 09:09:07.917517    9053 host.go:66] Checking if "multinode-090641-m02" exists ...
	I1107 09:09:07.917774    9053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 09:09:07.917840    9053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-090641-m02
	I1107 09:09:07.976197    9053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51082 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/multinode-090641-m02/id_rsa Username:docker}
	I1107 09:09:08.060412    9053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 09:09:08.069849    9053 status.go:257] multinode-090641-m02 status: &{Name:multinode-090641-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1107 09:09:08.069866    9053 status.go:255] checking status of multinode-090641-m03 ...
	I1107 09:09:08.070142    9053 cli_runner.go:164] Run: docker container inspect multinode-090641-m03 --format={{.State.Status}}
	I1107 09:09:08.127091    9053 status.go:330] multinode-090641-m03 host status = "Stopped" (err=<nil>)
	I1107 09:09:08.127124    9053 status.go:343] host is not running, skipping remaining checks
	I1107 09:09:08.127131    9053 status.go:257] multinode-090641-m03 status: &{Name:multinode-090641-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (13.76s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (19.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 node start m03 --alsologtostderr
E1107 09:09:26.216621    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-090641 node start m03 --alsologtostderr: (18.149083369s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (19.22s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (112.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-090641
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-090641
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-090641: (36.500736551s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-090641 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-090641 --wait=true -v=8 --alsologtostderr: (1m16.25651142s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-090641
--- PASS: TestMultiNode/serial/RestartKeepsNodes (112.87s)

                                                
                                    
TestMultiNode/serial/DeleteNode (16.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-090641 node delete m03: (15.987831167s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (16.85s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-090641 stop: (24.546618141s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-090641 status: exit status 7 (168.165956ms)

                                                
                                                
-- stdout --
	multinode-090641
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-090641-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-090641 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-090641 status --alsologtostderr: exit status 7 (165.706872ms)

                                                
                                                
-- stdout --
	multinode-090641
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-090641-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 09:12:01.841790    9670 out.go:296] Setting OutFile to fd 1 ...
	I1107 09:12:01.841965    9670 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:12:01.841972    9670 out.go:309] Setting ErrFile to fd 2...
	I1107 09:12:01.841976    9670 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 09:12:01.842089    9670 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
	I1107 09:12:01.842288    9670 out.go:303] Setting JSON to false
	I1107 09:12:01.842317    9670 mustload.go:65] Loading cluster: multinode-090641
	I1107 09:12:01.842362    9670 notify.go:220] Checking for updates...
	I1107 09:12:01.842650    9670 config.go:180] Loaded profile config "multinode-090641": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 09:12:01.842661    9670 status.go:255] checking status of multinode-090641 ...
	I1107 09:12:01.843080    9670 cli_runner.go:164] Run: docker container inspect multinode-090641 --format={{.State.Status}}
	I1107 09:12:01.897723    9670 status.go:330] multinode-090641 host status = "Stopped" (err=<nil>)
	I1107 09:12:01.897740    9670 status.go:343] host is not running, skipping remaining checks
	I1107 09:12:01.897745    9670 status.go:257] multinode-090641 status: &{Name:multinode-090641 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 09:12:01.897771    9670 status.go:255] checking status of multinode-090641-m02 ...
	I1107 09:12:01.898045    9670 cli_runner.go:164] Run: docker container inspect multinode-090641-m02 --format={{.State.Status}}
	I1107 09:12:01.953556    9670 status.go:330] multinode-090641-m02 host status = "Stopped" (err=<nil>)
	I1107 09:12:01.953576    9670 status.go:343] host is not running, skipping remaining checks
	I1107 09:12:01.953585    9670 status.go:257] multinode-090641-m02 status: &{Name:multinode-090641-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.88s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (30.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-090641
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-090641-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-090641-m02 --driver=docker : exit status 14 (357.127945ms)

                                                
                                                
-- stdout --
	* [multinode-090641-m02] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-090641-m02' is duplicated with machine name 'multinode-090641-m02' in profile 'multinode-090641'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-090641-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-090641-m03 --driver=docker : (27.269298344s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-090641
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-090641: exit status 80 (488.322809ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-090641
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-090641-m03 already exists in multinode-090641-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-090641-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-090641-m03: (2.576643508s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.75s)

                                                
                                    
TestPreload (142.43s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-091545 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-091545 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (55.918177818s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-091545 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-091545 -- docker pull gcr.io/k8s-minikube/busybox: (2.450591672s)
preload_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-091545 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.24.6
E1107 09:18:03.141837    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
preload_test.go:67: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-091545 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.24.6: (1m20.830678585s)
preload_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-091545 -- docker images
helpers_test.go:175: Cleaning up "test-preload-091545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-091545
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-091545: (2.812117088s)
--- PASS: TestPreload (142.43s)

                                                
                                    
TestScheduledStopUnix (101.19s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-091807 --memory=2048 --driver=docker 
E1107 09:18:30.901207    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-091807 --memory=2048 --driver=docker : (26.898087976s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-091807 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-091807 -n scheduled-stop-091807
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-091807 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-091807 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-091807 -n scheduled-stop-091807
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-091807
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-091807 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-091807
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-091807: exit status 7 (116.327025ms)

                                                
                                                
-- stdout --
	scheduled-stop-091807
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-091807 -n scheduled-stop-091807
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-091807 -n scheduled-stop-091807: exit status 7 (110.540419ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-091807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-091807
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-091807: (2.286204242s)
--- PASS: TestScheduledStopUnix (101.19s)

                                                
                                    
TestSkaffold (62.45s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3307425496 version
skaffold_test.go:63: skaffold version: v2.0.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-091948 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-091948 --memory=2600 --driver=docker : (27.831939278s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:110: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3307425496 run --minikube-profile skaffold-091948 --kube-context skaffold-091948 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3307425496 run --minikube-profile skaffold-091948 --kube-context skaffold-091948 --status-check=true --port-forward=false --interactive=false: (20.072851593s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-595bbfc99-vdx46" [efab2c45-d339-46cd-a0b0-4fa69f57d266] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.013799233s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-65d7c96d7d-5npgg" [024770dd-b5ad-4f77-aa7c-6033b3afd763] Running
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.008687947s
helpers_test.go:175: Cleaning up "skaffold-091948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-091948
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-091948: (2.864598613s)
--- PASS: TestSkaffold (62.45s)

                                                
                                    
TestInsufficientStorage (12.8s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-092051 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-092051 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (9.653686039s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ecf3459e-8d76-47e5-b270-a1b2b360ab6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-092051] minikube v1.28.0 on Darwin 13.0","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a33e6bc8-e257-4951-843f-01294bce61fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15310"}}
	{"specversion":"1.0","id":"6ed8b0a4-57f8-42d4-839a-e9bcfe8c486c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig"}}
	{"specversion":"1.0","id":"4467ca0c-39a2-4d7a-bd74-56154325639a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"7a9a624a-9bdf-4aec-bfcd-e2ff61757a3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0a71c5e7-ec1c-480b-9e95-adc6b828e641","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube"}}
	{"specversion":"1.0","id":"accac0ce-2112-4871-958f-dda9ac6cd153","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8cdd12dd-c242-47af-ba36-2d9d059a1708","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9fd8060e-d155-4e46-ac56-718a8f3a8758","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"13cd4440-6546-43c0-bbbe-46b1b135483d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"76776598-24e8-4e0f-9dbe-6077500781c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-092051 in cluster insufficient-storage-092051","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8fa73cd5-601a-4ad5-b33b-b206e70755f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"49f640e7-b87e-4271-83c1-9a3ac08a33ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8524ce8a-4cbe-4b26-aba3-657864ed79e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-092051 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-092051 --output=json --layout=cluster: exit status 7 (384.619938ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-092051","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-092051","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 09:21:01.169599   11384 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-092051" does not appear in /Users/jenkins/minikube-integration/15310-2115/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-092051 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-092051 --output=json --layout=cluster: exit status 7 (383.386078ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-092051","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-092051","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 09:21:01.553606   11394 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-092051" does not appear in /Users/jenkins/minikube-integration/15310-2115/kubeconfig
	E1107 09:21:01.562265   11394 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/insufficient-storage-092051/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-092051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-092051
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-092051: (2.374213649s)
--- PASS: TestInsufficientStorage (12.80s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-092210
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-092210: (3.588963657s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.59s)

                                                
                                    
TestPause/serial/Start (43.41s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-092353 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-092353 --memory=2048 --install-addons=false --wait=all --driver=docker : (43.409555194s)
--- PASS: TestPause/serial/Start (43.41s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (51.12s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-092353 --alsologtostderr -v=1 --driver=docker 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-092353 --alsologtostderr -v=1 --driver=docker : (51.105422657s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (51.12s)

                                                
                                    
TestPause/serial/Pause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-092353 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

                                                
                                    
TestPause/serial/VerifyStatus (0.41s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-092353 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-092353 --output=json --layout=cluster: exit status 2 (410.236103ms)

                                                
                                                
-- stdout --
	{"Name":"pause-092353","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-092353","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)

                                                
                                    
TestPause/serial/Unpause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-092353 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

                                                
                                    
TestPause/serial/PauseAgain (0.79s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-092353 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.79s)

                                                
                                    
TestPause/serial/DeletePaused (2.6s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-092353 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-092353 --alsologtostderr -v=5: (2.594951699s)
--- PASS: TestPause/serial/DeletePaused (2.60s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.56s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-092353
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-092353: exit status 1 (53.259442ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-092353

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.56s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-092534 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-092534 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (374.585435ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-092534] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15310
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.37s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (28.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-092534 --driver=docker 
E1107 09:25:38.259506    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:25:38.265004    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:25:38.275067    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:25:38.295276    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:25:38.335429    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:25:38.415528    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:25:38.575723    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:25:38.895866    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:25:39.536190    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:25:40.816330    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:25:43.376586    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:25:48.497147    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:25:58.738074    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-092534 --driver=docker : (27.83496232s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-092534 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (28.30s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-092534 --no-kubernetes --driver=docker 

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-092534 --no-kubernetes --driver=docker : (14.689096068s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-092534 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-092534 status -o json: exit status 2 (425.750579ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-092534","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-092534
E1107 09:26:19.221097    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-092534: (2.46156568s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.58s)

                                                
                                    
TestNoKubernetes/serial/Start (6.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-092534 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-092534 --no-kubernetes --driver=docker : (6.635221274s)
--- PASS: TestNoKubernetes/serial/Start (6.64s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-092534 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-092534 "sudo systemctl is-active --quiet service kubelet": exit status 1 (377.583344ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (16.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (15.431188972s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.11s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-092534
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-092534: (1.614527663s)
--- PASS: TestNoKubernetes/serial/Stop (1.61s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (4.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-092534 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-092534 --driver=docker : (4.126773613s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (4.13s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-092534 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-092534 "sudo systemctl is-active --quiet service kubelet": exit status 1 (375.220206ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.26s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15310
- KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2080666011/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2080666011/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2080666011/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2080666011/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.26s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.46s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15310
- KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4101894394/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4101894394/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4101894394/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4101894394/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.46s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (44.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-092103 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-092103 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (44.277275847s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (59.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-092104 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-092104 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (59.678154169s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.68s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-092103 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-092103 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-hxkn8" [12bda5b5-578b-4e35-b7ab-9c4b20f7977e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-hxkn8" [12bda5b5-578b-4e35-b7ab-9c4b20f7977e] Running
E1107 09:33:03.205490    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.005741131s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.19s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-092103 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (5.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.124143663s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.12s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (74.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-092105 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-092105 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (1m14.114666036s)
--- PASS: TestNetworkPlugins/group/cilium/Start (74.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-hmdtg" [889d4c83-8d0a-4c78-9b6e-2f5d2a9339d6] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.017531336s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-092104 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-092104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-72nc4" [2006c62d-48fc-4e32-a686-16c3c1ea27a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1107 09:33:30.939703    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-72nc4" [2006c62d-48fc-4e32-a686-16c3c1ea27a2] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.008011451s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-092104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-092104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-092104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (324.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-092105 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-092105 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (5m24.566863166s)
--- PASS: TestNetworkPlugins/group/calico/Start (324.57s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-pwhlw" [225e6e09-43ce-43c3-808e-a865cb788a12] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.017336048s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-092105 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (14.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-092105 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-2wmpj" [9d75c8b4-ffc9-43b0-9ac4-2a829eccd702] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-2wmpj" [9d75c8b4-ffc9-43b0-9ac4-2a829eccd702] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 14.00691772s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (14.68s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-092105 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-092105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-092105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/Start (80.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-092104 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
E1107 09:35:38.282680    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-092104 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (1m20.652092817s)
--- PASS: TestNetworkPlugins/group/false/Start (80.65s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-092104 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (13.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-092104 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-vfd8x" [00c75cf1-e50c-4496-ba06-bebde5774437] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-vfd8x" [00c75cf1-e50c-4496-ba06-bebde5774437] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.01138173s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.25s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-092104 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-092104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (5.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-092104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-092104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.108865495s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.11s)
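Note that the hairpin check above exits non-zero yet the subtest still passes: the failed connection is recorded but does not fail the test, so a broken hairpin path is evidently acceptable for the --cni=false configuration. A sketch of observing the same result by hand:
  kubectl --context false-092104 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
  echo $?   # returned 1 in this run; the PASS above shows the test tolerates that here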

                                                
                                    
TestNetworkPlugins/group/bridge/Start (46.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-092103 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-092103 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (46.506012639s)
--- PASS: TestNetworkPlugins/group/bridge/Start (46.51s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-092103 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-092103 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-4rqxl" [bb293c20-30a7-4099-8717-387e4e6135e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-4rqxl" [bb293c20-30a7-4099-8717-387e4e6135e5] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.008130766s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-092103 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (91.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-092103 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
E1107 09:37:53.268102    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:37:53.274567    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:37:53.284714    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:37:53.306791    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:37:53.347763    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:37:53.428050    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:37:53.588139    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:37:53.909501    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:37:54.549938    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:37:55.830463    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:37:58.390705    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:38:03.219588    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 09:38:03.512119    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:38:13.752821    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:38:21.640221    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
E1107 09:38:21.646506    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
E1107 09:38:21.658254    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
E1107 09:38:21.679103    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
E1107 09:38:21.719867    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
E1107 09:38:21.801914    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
E1107 09:38:21.963523    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
E1107 09:38:22.283667    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
E1107 09:38:22.924290    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
E1107 09:38:24.205471    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
E1107 09:38:26.765774    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
E1107 09:38:30.950706    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 09:38:31.887238    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
E1107 09:38:34.233601    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:38:42.127758    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
E1107 09:39:02.609354    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-092103 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (1m31.436299492s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (91.44s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-nln8z" [ea134e94-8c7f-4105-9b29-c71cef263afc] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.016783039s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-092103 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-092103 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-hdzlr" [264a591d-35cc-4855-ad29-6c7d26d2c574] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-hdzlr" [264a591d-35cc-4855-ad29-6c7d26d2c574] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.008429972s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.17s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-092105 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-092105 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-cb2bt" [7f2a7e4e-148b-4454-ae3f-b8306dfd49f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1107 09:39:15.195511    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-cb2bt" [7f2a7e4e-148b-4454-ae3f-b8306dfd49f3] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.017938849s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-092103 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-092105 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (46.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-092103 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-092103 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (46.387259368s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (46.39s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-092105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-092105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-092103 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (13.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-092103 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-ks6zs" [17dcb7f4-85df-407e-9705-78e136782c15] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-ks6zs" [17dcb7f4-85df-407e-9705-78e136782c15] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.008763386s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-092103 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-092103 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (88.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-094130 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3
E1107 09:41:32.884880    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:41:53.366397    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:42:01.352560    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:42:11.356202    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/cilium-092105/client.crt: no such file or directory
E1107 09:42:20.660745    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:42:20.666774    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:42:20.678836    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:42:20.699184    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:42:20.740121    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:42:20.820439    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:42:20.982634    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:42:21.303477    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:42:21.944370    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:42:23.224860    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:42:25.785372    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:42:30.905693    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:42:34.327862    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:42:41.147373    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:42:46.318209    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 09:42:53.276951    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-094130 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3: (1m28.50122469s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (88.50s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-094130 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [b8dc5c03-84a5-4ccf-9e5e-a5616093b2e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1107 09:43:01.629167    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
helpers_test.go:342: "busybox" [b8dc5c03-84a5-4ccf-9e5e-a5616093b2e0] Running
E1107 09:43:03.229284    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.014916005s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-094130 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.26s)
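The DeployApp step above boils down to two kubectl calls; a sketch of running them directly against the same context (testdata/busybox.yaml is the manifest shipped with the integration tests):
  kubectl --context no-preload-094130 create -f testdata/busybox.yaml
  # the test then waits for integration-test=busybox to be Running before probing the pod
  kubectl --context no-preload-094130 exec busybox -- /bin/sh -c "ulimit -n"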

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-094130 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-094130 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.77s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-094130 --alsologtostderr -v=3
E1107 09:43:20.964250    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:43:21.650259    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kindnet-092104/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-094130 --alsologtostderr -v=3: (12.440961864s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-094130 -n no-preload-094130
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-094130 -n no-preload-094130: exit status 7 (113.248832ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-094130 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.39s)
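A sketch of the sequence this subtest exercises: query the stopped profile's host state (the exit status 7 above corresponds to the Stopped state printed on stdout, which the test treats as acceptable) and then enable an addon while the cluster is down:
  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-094130 -n no-preload-094130
  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-094130 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4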

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (299.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-094130 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3
E1107 09:43:30.959758    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-094130 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3: (4m59.457068051s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-094130 -n no-preload-094130
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (299.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-093929 --alsologtostderr -v=3
E1107 09:45:12.777174    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:45:12.783626    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:45:12.793837    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:45:12.816015    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:45:12.856371    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:45:12.937987    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:45:13.099028    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:45:13.420320    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-093929 --alsologtostderr -v=3: (1.58813153s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.59s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-093929 -n old-k8s-version-093929
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-093929 -n old-k8s-version-093929: exit status 7 (114.56421ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-093929 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-8dhbt" [c62b8b9e-fe59-412d-b397-78681e98ce7f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-8dhbt" [c62b8b9e-fe59-412d-b397-78681e98ce7f] Running
E1107 09:48:30.973353    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.016314539s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-8dhbt" [c62b8b9e-fe59-412d-b397-78681e98ce7f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005494486s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-094130 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-094130 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.42s)
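The image verification above is a single SSH invocation into the node; a sketch of inspecting the same data by hand (the table form is an optional extra, not part of the test):
  # JSON output, as the test consumes it
  out/minikube-darwin-amd64 ssh -p no-preload-094130 "sudo crictl images -o json"
  # or, for a quick human-readable listing
  out/minikube-darwin-amd64 ssh -p no-preload-094130 "sudo crictl images"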

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-094130 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-094130 -n no-preload-094130
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-094130 -n no-preload-094130: exit status 2 (430.685168ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-094130 -n no-preload-094130
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-094130 -n no-preload-094130: exit status 2 (408.618271ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-094130 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-094130 -n no-preload-094130
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-094130 -n no-preload-094130
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.28s)
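The Pause subtest drives minikube through pause, status, unpause, status; a sketch of the same round-trip using the Go status templates from the log (status exits non-zero while components are paused or stopped, which the test tolerates):
  out/minikube-darwin-amd64 pause -p no-preload-094130 --alsologtostderr -v=1
  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-094130 -n no-preload-094130   # reported Paused in this run
  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-094130 -n no-preload-094130     # reported Stopped in this run
  out/minikube-darwin-amd64 unpause -p no-preload-094130 --alsologtostderr -v=1
  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-094130 -n no-preload-094130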

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (45.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-094848 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3
E1107 09:49:05.977657    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
E1107 09:49:07.768112    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
E1107 09:49:27.532573    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/cilium-092105/client.crt: no such file or directory
E1107 09:49:33.673459    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/calico-092105/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-094848 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3: (45.320817167s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-094848 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [5201a0e6-13e3-47a8-8b25-055d3d67d0e7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1107 09:49:35.462770    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/enable-default-cni-092103/client.crt: no such file or directory
helpers_test.go:342: "busybox" [5201a0e6-13e3-47a8-8b25-055d3d67d0e7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.01818396s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-094848 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-094848 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-094848 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-094848 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-094848 --alsologtostderr -v=3: (12.398538277s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-094848 -n embed-certs-094848
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-094848 -n embed-certs-094848: exit status 7 (111.649337ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-094848 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (298.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-094848 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3
E1107 09:50:12.790283    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:50:38.315778    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/skaffold-091948/client.crt: no such file or directory
E1107 09:50:40.485050    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
E1107 09:51:12.421507    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
E1107 09:52:20.686186    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/bridge-092103/client.crt: no such file or directory
E1107 09:52:53.301266    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory
E1107 09:52:58.825351    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
E1107 09:52:58.830655    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
E1107 09:52:58.841148    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
E1107 09:52:58.862268    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
E1107 09:52:58.902961    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
E1107 09:52:58.985147    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
E1107 09:52:59.145366    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
E1107 09:52:59.465933    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
E1107 09:53:00.106523    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
E1107 09:53:01.387047    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
E1107 09:53:03.252774    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 09:53:03.949358    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory
E1107 09:53:09.070688    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/no-preload-094130/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-094848 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3: (4m57.928214307s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-094848 -n embed-certs-094848
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (298.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (15.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-lpnxs" [b2f4c1ee-d05b-4bfe-91e0-8b6e2b2f8a0a] Pending
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-lpnxs" [b2f4c1ee-d05b-4bfe-91e0-8b6e2b2f8a0a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-lpnxs" [b2f4c1ee-d05b-4bfe-91e0-8b6e2b2f8a0a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.013954695s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (15.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-lpnxs" [b2f4c1ee-d05b-4bfe-91e0-8b6e2b2f8a0a] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007077867s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-094848 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-094848 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)
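For reference, the image check above can be reproduced by hand against the same profile, assuming the embed-certs-094848 cluster still exists. A minimal sketch; the jq filter is an extra convenience on the host and is not part of the test:

    # List the images the container runtime inside the node knows about (exactly what the test runs)
    out/minikube-darwin-amd64 ssh -p embed-certs-094848 "sudo crictl images -o json"
    # Optionally pull out just the repo tags locally (assumes jq is installed on the host)
    out/minikube-darwin-amd64 ssh -p embed-certs-094848 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'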

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-094848 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-094848 -n embed-certs-094848
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-094848 -n embed-certs-094848: exit status 2 (411.068035ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-094848 -n embed-certs-094848
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-094848 -n embed-certs-094848: exit status 2 (408.97213ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-094848 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-094848 -n embed-certs-094848
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-094848 -n embed-certs-094848
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.20s)
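The pause verification above boils down to the manual sequence below, assuming the embed-certs-094848 profile still exists; note the test treats exit status 2 from the status command as acceptable while components are paused (the "may be ok" lines above):

    # Pause the control plane, then check component state (Paused / Stopped with exit status 2 is expected here)
    out/minikube-darwin-amd64 pause -p embed-certs-094848 --alsologtostderr -v=1
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-094848 -n embed-certs-094848
    out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-094848 -n embed-certs-094848
    # Unpause and re-run the same status checks; the test only requires these to exit cleanly
    out/minikube-darwin-amd64 unpause -p embed-certs-094848 --alsologtostderr -v=1
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-094848 -n embed-certs-094848
    out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-094848 -n embed-certs-094848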

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-095521 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-095521 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3: (44.900127471s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.90s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-095521 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [f0883285-0eb6-4192-9a5c-30df13df3ba9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
helpers_test.go:342: "busybox" [f0883285-0eb6-4192-9a5c-30df13df3ba9] Running
E1107 09:56:12.431629    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/false-092104/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.017527812s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-095521 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.30s)
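The DeployApp step can be replayed by hand with the commands below, a rough sketch assuming a checkout of the minikube integration-test tree (for testdata/busybox.yaml) and the default-k8s-diff-port-095521 context; kubectl wait stands in here for the Go polling helper the test actually uses:

    # Create the busybox test pod from the integration-test manifest
    kubectl --context default-k8s-diff-port-095521 create -f testdata/busybox.yaml
    # Wait for the pod labelled integration-test=busybox to become Ready (the test allows up to 8m)
    kubectl --context default-k8s-diff-port-095521 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    # Probe the open-file limit inside the pod, as the test does
    kubectl --context default-k8s-diff-port-095521 exec busybox -- /bin/sh -c "ulimit -n"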

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.77s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-095521 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-095521 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.77s)
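What this step exercises, in manual form: enabling the metrics-server addon while the cluster is running, with the addon image and registry overridden (the test deliberately points at a placeholder registry), then inspecting the Deployment the addon creates. A sketch using the same flags shown in the log:

    # Enable metrics-server with an overridden image and a placeholder registry, as in the test
    out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-095521 \
      --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # Confirm the override landed on the Deployment the addon manages
    kubectl --context default-k8s-diff-port-095521 describe deploy/metrics-server -n kube-system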

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-095521 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-095521 --alsologtostderr -v=3: (12.400635192s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.40s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-095521 -n default-k8s-diff-port-095521
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-095521 -n default-k8s-diff-port-095521: exit status 7 (114.195992ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-095521 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.39s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (301.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-095521 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-095521 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3: (5m1.305142336s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-095521 -n default-k8s-diff-port-095521
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (301.80s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-5gtnz" [b477ed6d-10ed-4270-bc12-a8a90661249d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1107 10:01:35.866692    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/kubenet-092103/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-5gtnz" [b477ed6d-10ed-4270-bc12-a8a90661249d] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.016438411s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-5gtnz" [b477ed6d-10ed-4270-bc12-a8a90661249d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00846414s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-095521 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-095521 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-095521 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-095521 -n default-k8s-diff-port-095521
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-095521 -n default-k8s-diff-port-095521: exit status 2 (406.745776ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-095521 -n default-k8s-diff-port-095521
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-095521 -n default-k8s-diff-port-095521: exit status 2 (410.485312ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-095521 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-095521 -n default-k8s-diff-port-095521
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-095521 -n default-k8s-diff-port-095521
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (41.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-100155 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-100155 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3: (41.025089653s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.03s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-100155 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-100155 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-100155 --alsologtostderr -v=3: (12.392353793s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-100155 -n newest-cni-100155
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-100155 -n newest-cni-100155: exit status 7 (112.233556ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-100155 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (18.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-100155 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3
E1107 10:02:53.318892    3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/auto-092103/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-100155 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3: (18.451802334s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-100155 -n newest-cni-100155
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.89s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-100155 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.49s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-100155 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-100155 -n newest-cni-100155
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-100155 -n newest-cni-100155: exit status 2 (411.747975ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-100155 -n newest-cni-100155
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-100155 -n newest-cni-100155: exit status 2 (408.458728ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-100155 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-100155 -n newest-cni-100155
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-100155 -n newest-cni-100155
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.10s)

                                                
                                    

Test skip (18/295)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.5s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:283: registry stabilized in 10.574559ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-dqvwh" [25ee8eb7-3346-4944-8c4a-49e67463cc8c] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010107863s
addons_test.go:288: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-wzjrg" [b23e2d73-abfe-4fae-bd6e-f89fa89c7bb2] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:288: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011290587s
addons_test.go:293: (dbg) Run:  kubectl --context addons-084531 delete po -l run=registry-test --now
addons_test.go:298: (dbg) Run:  kubectl --context addons-084531 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:298: (dbg) Done: kubectl --context addons-084531 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.377588825s)
addons_test.go:308: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (15.50s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (10.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:165: (dbg) Run:  kubectl --context addons-084531 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Run:  kubectl --context addons-084531 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:198: (dbg) Run:  kubectl --context addons-084531 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [843e6cc0-79bc-45c0-a235-8a20ae0c97ac] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [843e6cc0-79bc-45c0-a235-8a20ae0c97ac] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.007528799s
addons_test.go:215: (dbg) Run:  out/minikube-darwin-amd64 -p addons-084531 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:235: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.84s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:451: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-085021 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-085021 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-jzbn2" [05b071a4-fca9-4508-87c7-787312d731f3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-6458c8fb6f-jzbn2" [05b071a4-fca9-4508-87c7-787312d731f3] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.006284781s
functional_test.go:1576: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (8.12s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel (0.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-092103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-092103
--- SKIP: TestNetworkPlugins/group/flannel (0.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel (0.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-092104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-092104
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.57s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.41s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-095521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-095521
--- SKIP: TestStartStop/group/disable-driver-mounts (0.41s)

                                                
                                    